Oct 5 02:42:05 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Oct 5 02:42:05 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 5 02:42:05 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 5 02:42:05 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 5 02:42:05 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 5 02:42:05 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 5 02:42:05 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 5 02:42:05 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 5 02:42:05 localhost kernel: signal: max sigframe size: 1776
Oct 5 02:42:05 localhost kernel: BIOS-provided physical RAM map:
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 5 02:42:05 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Oct 5 02:42:05 localhost kernel: NX (Execute Disable) protection: active
Oct 5 02:42:05 localhost kernel: SMBIOS 2.8 present.
Oct 5 02:42:05 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 5 02:42:05 localhost kernel: Hypervisor detected: KVM
Oct 5 02:42:05 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 5 02:42:05 localhost kernel: kvm-clock: using sched offset of 2954017453 cycles
Oct 5 02:42:05 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 5 02:42:05 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 5 02:42:05 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Oct 5 02:42:05 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 5 02:42:05 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 5 02:42:05 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 5 02:42:05 localhost kernel: Using GB pages for direct mapping
Oct 5 02:42:05 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Oct 5 02:42:05 localhost kernel: ACPI: Early table checksum verification disabled
Oct 5 02:42:05 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 5 02:42:05 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:05 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:05 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:05 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 5 02:42:05 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:05 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 5 02:42:05 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 5 02:42:05 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 5 02:42:05 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 5 02:42:05 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 5 02:42:05 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 5 02:42:05 localhost kernel: No NUMA configuration found
Oct 5 02:42:05 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Oct 5 02:42:05 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Oct 5 02:42:05 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Oct 5 02:42:05 localhost kernel: Zone ranges:
Oct 5 02:42:05 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 5 02:42:05 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 5 02:42:05 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Oct 5 02:42:05 localhost kernel: Device empty
Oct 5 02:42:05 localhost kernel: Movable zone start for each node
Oct 5 02:42:05 localhost kernel: Early memory node ranges
Oct 5 02:42:05 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 5 02:42:05 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 5 02:42:05 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Oct 5 02:42:05 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Oct 5 02:42:05 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 5 02:42:05 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 5 02:42:05 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 5 02:42:05 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 5 02:42:05 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 5 02:42:05 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 5 02:42:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 5 02:42:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 5 02:42:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 5 02:42:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 5 02:42:05 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 5 02:42:05 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 5 02:42:05 localhost kernel: TSC deadline timer available
Oct 5 02:42:05 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 5 02:42:05 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 5 02:42:05 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 5 02:42:05 localhost kernel: Booting paravirtualized kernel on KVM
Oct 5 02:42:05 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 5 02:42:05 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 5 02:42:05 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Oct 5 02:42:05 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 5 02:42:05 localhost kernel: Fallback order for Node 0: 0
Oct 5 02:42:05 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Oct 5 02:42:05 localhost kernel: Policy zone: Normal
Oct 5 02:42:05 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 5 02:42:05 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Oct 5 02:42:05 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Oct 5 02:42:05 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 5 02:42:05 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 5 02:42:05 localhost kernel: software IO TLB: area num 8.
Oct 5 02:42:05 localhost kernel: Memory: 2873460K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Oct 5 02:42:05 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Oct 5 02:42:05 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 5 02:42:05 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Oct 5 02:42:05 localhost kernel: ftrace: allocated 176 pages with 3 groups
Oct 5 02:42:05 localhost kernel: Dynamic Preempt: voluntary
Oct 5 02:42:05 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 5 02:42:05 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 5 02:42:05 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Oct 5 02:42:05 localhost kernel: #011Rude variant of Tasks RCU enabled.
Oct 5 02:42:05 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Oct 5 02:42:05 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 5 02:42:05 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 5 02:42:05 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 5 02:42:05 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 5 02:42:05 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 5 02:42:05 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Oct 5 02:42:05 localhost kernel: Console: colour VGA+ 80x25
Oct 5 02:42:05 localhost kernel: printk: console [tty0] enabled
Oct 5 02:42:05 localhost kernel: printk: console [ttyS0] enabled
Oct 5 02:42:05 localhost kernel: ACPI: Core revision 20211217
Oct 5 02:42:05 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 5 02:42:05 localhost kernel: x2apic enabled
Oct 5 02:42:05 localhost kernel: Switched APIC routing to physical x2apic.
Oct 5 02:42:05 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 5 02:42:05 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 5 02:42:05 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 5 02:42:05 localhost kernel: LSM: Security Framework initializing
Oct 5 02:42:05 localhost kernel: Yama: becoming mindful.
Oct 5 02:42:05 localhost kernel: SELinux: Initializing.
Oct 5 02:42:05 localhost kernel: LSM support for eBPF active
Oct 5 02:42:05 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 5 02:42:05 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 5 02:42:05 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 5 02:42:05 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 5 02:42:05 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 5 02:42:05 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 5 02:42:05 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 5 02:42:05 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 5 02:42:05 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 5 02:42:05 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 5 02:42:05 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 5 02:42:05 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 5 02:42:05 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 5 02:42:05 localhost kernel: Freeing SMP alternatives memory: 36K
Oct 5 02:42:05 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 5 02:42:05 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Oct 5 02:42:05 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 5 02:42:05 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 5 02:42:05 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 5 02:42:05 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 5 02:42:05 localhost kernel: ... version: 0
Oct 5 02:42:05 localhost kernel: ... bit width: 48
Oct 5 02:42:05 localhost kernel: ... generic registers: 6
Oct 5 02:42:05 localhost kernel: ... value mask: 0000ffffffffffff
Oct 5 02:42:05 localhost kernel: ... max period: 00007fffffffffff
Oct 5 02:42:05 localhost kernel: ... fixed-purpose events: 0
Oct 5 02:42:05 localhost kernel: ... event mask: 000000000000003f
Oct 5 02:42:05 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 5 02:42:05 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Oct 5 02:42:05 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 5 02:42:05 localhost kernel: x86: Booting SMP configuration:
Oct 5 02:42:05 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Oct 5 02:42:05 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 5 02:42:05 localhost kernel: smpboot: Max logical packages: 8
Oct 5 02:42:05 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 5 02:42:05 localhost kernel: node 0 deferred pages initialised in 25ms
Oct 5 02:42:05 localhost kernel: devtmpfs: initialized
Oct 5 02:42:05 localhost kernel: x86/mm: Memory block size: 128MB
Oct 5 02:42:05 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 5 02:42:05 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 5 02:42:05 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 5 02:42:05 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 5 02:42:05 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Oct 5 02:42:05 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 5 02:42:05 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 5 02:42:05 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 5 02:42:05 localhost kernel: audit: type=2000 audit(1759646524.036:1): state=initialized audit_enabled=0 res=1
Oct 5 02:42:05 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 5 02:42:05 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 5 02:42:05 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 5 02:42:05 localhost kernel: cpuidle: using governor menu
Oct 5 02:42:05 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Oct 5 02:42:05 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 5 02:42:05 localhost kernel: PCI: Using configuration type 1 for base access
Oct 5 02:42:05 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 5 02:42:05 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 5 02:42:05 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Oct 5 02:42:05 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 5 02:42:05 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 5 02:42:05 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 5 02:42:05 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 5 02:42:05 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 5 02:42:05 localhost kernel: ACPI: Interpreter enabled
Oct 5 02:42:05 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 5 02:42:05 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 5 02:42:05 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 5 02:42:05 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 5 02:42:05 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 5 02:42:05 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 5 02:42:05 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [3] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [4] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [5] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [6] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [7] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [8] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [9] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [10] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [11] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [12] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [13] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [14] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [15] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [16] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [17] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [18] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [19] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [20] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [21] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [22] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [23] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [24] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [25] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [26] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [27] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [28] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [29] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [30] registered
Oct 5 02:42:05 localhost kernel: acpiphp: Slot [31] registered
Oct 5 02:42:05 localhost kernel: PCI host bridge to bus 0000:00
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 5 02:42:05 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 5 02:42:05 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 5 02:42:05 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Oct 5 02:42:05 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Oct 5 02:42:05 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 5 02:42:05 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 5 02:42:05 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Oct 5 02:42:05 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Oct 5 02:42:05 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Oct 5 02:42:05 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 5 02:42:05 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Oct 5 02:42:05 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Oct 5 02:42:05 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 5 02:42:05 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 5 02:42:05 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 5 02:42:05 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 5 02:42:05 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 5 02:42:05 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 5 02:42:05 localhost kernel: iommu: Default domain type: Translated
Oct 5 02:42:05 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 5 02:42:05 localhost kernel: SCSI subsystem initialized
Oct 5 02:42:05 localhost kernel: ACPI: bus type USB registered
Oct 5 02:42:05 localhost kernel: usbcore: registered new interface driver usbfs
Oct 5 02:42:05 localhost kernel: usbcore: registered new interface driver hub
Oct 5 02:42:05 localhost kernel: usbcore: registered new device driver usb
Oct 5 02:42:05 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 5 02:42:05 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 5 02:42:05 localhost kernel: PTP clock support registered
Oct 5 02:42:05 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 5 02:42:05 localhost kernel: NetLabel: Initializing
Oct 5 02:42:05 localhost kernel: NetLabel: domain hash size = 128
Oct 5 02:42:05 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Oct 5 02:42:05 localhost kernel: NetLabel: unlabeled traffic allowed by default
Oct 5 02:42:05 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 5 02:42:05 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 5 02:42:05 localhost kernel: vgaarb: loaded
Oct 5 02:42:05 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 5 02:42:05 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 5 02:42:05 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 5 02:42:05 localhost kernel: pnp: PnP ACPI init
Oct 5 02:42:05 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 5 02:42:05 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 5 02:42:05 localhost kernel: NET: Registered PF_INET protocol family
Oct 5 02:42:05 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 5 02:42:05 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Oct 5 02:42:05 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 5 02:42:05 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 5 02:42:05 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 5 02:42:05 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Oct 5 02:42:05 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Oct 5 02:42:05 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 5 02:42:05 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 5 02:42:05 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 5 02:42:05 localhost kernel: NET: Registered PF_XDP protocol family
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 5 02:42:05 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 5 02:42:05 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 5 02:42:05 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 5 02:42:05 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 27648 usecs
Oct 5 02:42:05 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 5 02:42:05 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 5 02:42:05 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 5 02:42:05 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 5 02:42:05 localhost kernel: ACPI: bus type thunderbolt registered
Oct 5 02:42:05 localhost kernel: Initialise system trusted keyrings
Oct 5 02:42:05 localhost kernel: Key type blacklist registered
Oct 5 02:42:05 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Oct 5 02:42:05 localhost kernel: zbud: loaded
Oct 5 02:42:05 localhost kernel: integrity: Platform Keyring initialized
Oct 5 02:42:05 localhost kernel: NET: Registered PF_ALG protocol family
Oct 5 02:42:05 localhost kernel: xor: automatically using best checksumming function avx
Oct 5 02:42:05 localhost kernel: Key type asymmetric registered
Oct 5 02:42:05 localhost kernel: Asymmetric key parser 'x509' registered
Oct 5 02:42:05 localhost kernel: Running certificate verification selftests
Oct 5 02:42:05 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 5 02:42:05 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 5 02:42:05 localhost kernel: io scheduler mq-deadline registered
Oct 5 02:42:05 localhost kernel: io scheduler kyber registered
Oct 5 02:42:05 localhost kernel: io scheduler bfq registered
Oct 5 02:42:05 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 5 02:42:05 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 5 02:42:05 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 5 02:42:05 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 5 02:42:05 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 5 02:42:05 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 5 02:42:05 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 5 02:42:05 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 5 02:42:05 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 5 02:42:05 localhost kernel: Non-volatile memory driver v1.3
Oct 5 02:42:05 localhost kernel: rdac: device handler registered
Oct 5 02:42:05 localhost kernel: hp_sw: device handler registered
Oct 5 02:42:05 localhost kernel: emc: device handler registered
Oct 5 02:42:05 localhost kernel: alua: device handler registered
Oct 5 02:42:05 localhost kernel: libphy: Fixed MDIO Bus: probed
Oct 5 02:42:05 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Oct 5 02:42:05 localhost kernel: ehci-pci: EHCI PCI platform driver
Oct 5 02:42:05 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Oct 5 02:42:05 localhost kernel: ohci-pci: OHCI PCI platform driver
Oct 5 02:42:05 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Oct 5 02:42:05 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 5 02:42:05 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 5 02:42:05 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 5 02:42:05 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 5 02:42:05 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 5 02:42:05 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 5 02:42:05 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 5 02:42:05 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Oct 5 02:42:05 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 5 02:42:05 localhost kernel: hub 1-0:1.0: USB hub found
Oct 5 02:42:05 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 5 02:42:05 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 5 02:42:05 localhost kernel: usbserial: USB Serial support registered for generic
Oct 5 02:42:05 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 5 02:42:05 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 5 02:42:05 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 5 02:42:05 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 5 02:42:05 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 5 02:42:05 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 5 02:42:05 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 5 02:42:05 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-05T06:42:04 UTC (1759646524)
Oct 5 02:42:05 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 5 02:42:05 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 5 02:42:05 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 5 02:42:05 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 5 02:42:05 localhost kernel: usbcore: registered new interface driver usbhid
Oct 5 02:42:05 localhost kernel: usbhid: USB HID core driver
Oct 5 02:42:05 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 5 02:42:05 localhost kernel: Initializing XFRM netlink socket
Oct 5 02:42:05 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 5 02:42:05 localhost kernel: Segment Routing with IPv6
Oct 5 02:42:05 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 5 02:42:05 localhost kernel: mpls_gso: MPLS GSO support
Oct 5 02:42:05 localhost kernel: IPI shorthand broadcast: enabled
Oct 5 02:42:05 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 5 02:42:05 localhost kernel: AES CTR mode by8 optimization enabled
Oct 5 02:42:05 localhost kernel: sched_clock: Marking stable (674247467, 184547046)->(995846396, -137051883)
Oct 5 02:42:05 localhost kernel: registered taskstats version 1
Oct 5 02:42:05 localhost kernel: Loading compiled-in X.509 certificates
Oct 5 02:42:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 5 02:42:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 5 02:42:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 5 02:42:05 localhost kernel: zswap: loaded using pool lzo/zbud
Oct 5 02:42:05 localhost kernel: page_owner is disabled
Oct 5 02:42:05 localhost kernel: Key type big_key registered
Oct 5 02:42:05 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 5 02:42:05 localhost kernel: Freeing initrd memory: 74232K
Oct 5 02:42:05 localhost kernel: Key type encrypted registered
Oct 5 02:42:05 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 5 02:42:05 localhost kernel: Loading compiled-in module X.509 certificates
Oct 5 02:42:05 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 5 02:42:05 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 5 02:42:05 localhost kernel: ima: No architecture policies found
Oct 5 02:42:05 localhost kernel: evm: Initialising EVM extended attributes:
Oct 5 02:42:05 localhost kernel: evm: security.selinux
Oct 5 02:42:05 localhost kernel: evm: security.SMACK64 (disabled)
Oct 5 02:42:05 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 5 02:42:05 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 5 02:42:05 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 5 02:42:05 localhost kernel: evm: security.apparmor (disabled)
Oct 5 02:42:05 localhost kernel: evm: security.ima
Oct 5 02:42:05 localhost kernel: evm: security.capability
Oct 5 02:42:05 localhost kernel: evm: HMAC attrs: 0x1
Oct 5 02:42:05 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 5 02:42:05 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 5 02:42:05 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 5 02:42:05 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 5 02:42:05 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 5 02:42:05 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 5 02:42:05 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 5 02:42:05 localhost kernel: Freeing unused decrypted memory: 2036K
Oct 5 02:42:05 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Oct 5 02:42:05 localhost kernel: Write protecting the kernel read-only data: 26624k
Oct 5 02:42:05 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct 5 02:42:05 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Oct 5 02:42:05 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 5 02:42:05 localhost kernel: Run /init as init process
Oct 5 02:42:05 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 5 02:42:05 localhost systemd[1]: Detected virtualization kvm.
Oct 5 02:42:05 localhost systemd[1]: Detected architecture x86-64.
Oct 5 02:42:05 localhost systemd[1]: Running in initrd.
Oct 5 02:42:05 localhost systemd[1]: No hostname configured, using default hostname.
Oct 5 02:42:05 localhost systemd[1]: Hostname set to .
Oct 5 02:42:05 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 5 02:42:05 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 5 02:42:05 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 5 02:42:05 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 5 02:42:05 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 5 02:42:05 localhost systemd[1]: Reached target Local File Systems.
Oct 5 02:42:05 localhost systemd[1]: Reached target Path Units.
Oct 5 02:42:05 localhost systemd[1]: Reached target Slice Units.
Oct 5 02:42:05 localhost systemd[1]: Reached target Swaps.
Oct 5 02:42:05 localhost systemd[1]: Reached target Timer Units.
Oct 5 02:42:05 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 5 02:42:05 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 5 02:42:05 localhost systemd[1]: Listening on Journal Socket.
Oct 5 02:42:05 localhost systemd[1]: Listening on udev Control Socket.
Oct 5 02:42:05 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 5 02:42:05 localhost systemd[1]: Reached target Socket Units.
Oct 5 02:42:05 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 5 02:42:05 localhost systemd[1]: Starting Journal Service...
Oct 5 02:42:05 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 02:42:05 localhost systemd[1]: Starting Create System Users...
Oct 5 02:42:05 localhost systemd[1]: Starting Setup Virtual Console...
Oct 5 02:42:05 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 5 02:42:05 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 02:42:05 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 5 02:42:05 localhost systemd-journald[281]: Journal started
Oct 5 02:42:05 localhost systemd-journald[281]: Runtime Journal (/run/log/journal/26eb4766c6624233bdfd7faae464b2de) is 8.0M, max 314.7M, 306.7M free.
Oct 5 02:42:05 localhost systemd-modules-load[282]: Module 'msr' is built in
Oct 5 02:42:05 localhost systemd[1]: Started Journal Service.
Oct 5 02:42:05 localhost systemd[1]: Finished Setup Virtual Console.
Oct 5 02:42:05 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 5 02:42:05 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 5 02:42:05 localhost systemd[1]: Starting dracut cmdline hook...
Oct 5 02:42:05 localhost systemd-sysusers[283]: Creating group 'sgx' with GID 997.
Oct 5 02:42:05 localhost systemd-sysusers[283]: Creating group 'users' with GID 100.
Oct 5 02:42:05 localhost systemd-sysusers[283]: Creating group 'dbus' with GID 81.
Oct 5 02:42:05 localhost systemd-sysusers[283]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 5 02:42:05 localhost systemd[1]: Finished Create System Users.
Oct 5 02:42:05 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 5 02:42:05 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 5 02:42:05 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Oct 5 02:42:05 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 5 02:42:05 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 5 02:42:05 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 5 02:42:05 localhost systemd[1]: Finished dracut cmdline hook.
Oct 5 02:42:05 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 5 02:42:05 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 5 02:42:05 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 5 02:42:05 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Oct 5 02:42:05 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 5 02:42:05 localhost kernel: RPC: Registered udp transport module.
Oct 5 02:42:05 localhost kernel: RPC: Registered tcp transport module.
Oct 5 02:42:05 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 5 02:42:05 localhost rpc.statd[405]: Version 2.5.4 starting
Oct 5 02:42:05 localhost rpc.statd[405]: Initializing NSM state
Oct 5 02:42:05 localhost rpc.idmapd[410]: Setting log level to 0
Oct 5 02:42:05 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 5 02:42:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 5 02:42:05 localhost systemd-udevd[423]: Using default interface naming scheme 'rhel-9.0'.
Oct 5 02:42:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 5 02:42:05 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 5 02:42:05 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 5 02:42:05 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 5 02:42:05 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 5 02:42:05 localhost systemd[1]: Reached target System Initialization.
Oct 5 02:42:05 localhost systemd[1]: Reached target Basic System.
Oct 5 02:42:05 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 5 02:42:05 localhost systemd[1]: Reached target Network.
Oct 5 02:42:05 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 5 02:42:05 localhost systemd[1]: Starting dracut initqueue hook...
Oct 5 02:42:05 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Oct 5 02:42:05 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 5 02:42:05 localhost kernel: GPT:20971519 != 838860799
Oct 5 02:42:05 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 5 02:42:05 localhost kernel: GPT:20971519 != 838860799
Oct 5 02:42:05 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 5 02:42:05 localhost kernel: vda: vda1 vda2 vda3 vda4
Oct 5 02:42:05 localhost systemd-udevd[437]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 02:42:06 localhost kernel: scsi host0: ata_piix
Oct 5 02:42:06 localhost kernel: scsi host1: ata_piix
Oct 5 02:42:06 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Oct 5 02:42:06 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Oct 5 02:42:06 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 5 02:42:06 localhost systemd[1]: Reached target Initrd Root Device.
Oct 5 02:42:06 localhost kernel: ata1: found unknown device (class 0)
Oct 5 02:42:06 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 5 02:42:06 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 5 02:42:06 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 5 02:42:06 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 5 02:42:06 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 5 02:42:06 localhost systemd[1]: Finished dracut initqueue hook.
Oct 5 02:42:06 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 5 02:42:06 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 5 02:42:06 localhost systemd[1]: Reached target Remote File Systems.
Oct 5 02:42:06 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 5 02:42:06 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 5 02:42:06 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Oct 5 02:42:06 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Oct 5 02:42:06 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 5 02:42:06 localhost systemd[1]: Mounting /sysroot...
Oct 5 02:42:06 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 5 02:42:06 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Oct 5 02:42:06 localhost kernel: XFS (vda4): Ending clean mount
Oct 5 02:42:06 localhost systemd[1]: Mounted /sysroot.
Oct 5 02:42:06 localhost systemd[1]: Reached target Initrd Root File System.
Oct 5 02:42:06 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 5 02:42:06 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 5 02:42:06 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 5 02:42:06 localhost systemd[1]: Reached target Initrd File Systems.
Oct 5 02:42:06 localhost systemd[1]: Reached target Initrd Default Target.
Oct 5 02:42:06 localhost systemd[1]: Starting dracut mount hook...
Oct 5 02:42:06 localhost systemd[1]: Finished dracut mount hook.
Oct 5 02:42:06 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 5 02:42:06 localhost rpc.idmapd[410]: exiting on signal 15
Oct 5 02:42:06 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 5 02:42:06 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 5 02:42:06 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 5 02:42:06 localhost systemd[1]: Stopped target Network.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Timer Units.
Oct 5 02:42:06 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 5 02:42:06 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 5 02:42:06 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 5 02:42:06 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Basic System.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Path Units.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Remote File Systems.
Oct 5 02:42:06 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 5 02:42:07 localhost systemd[1]: Stopped target Slice Units.
Oct 5 02:42:07 localhost systemd[1]: Stopped target Socket Units.
Oct 5 02:42:07 localhost systemd[1]: Stopped target System Initialization.
Oct 5 02:42:07 localhost systemd[1]: Stopped target Local File Systems.
Oct 5 02:42:07 localhost systemd[1]: Stopped target Swaps.
Oct 5 02:42:07 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped dracut mount hook.
Oct 5 02:42:07 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 5 02:42:07 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 5 02:42:07 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 5 02:42:07 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 5 02:42:07 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 5 02:42:07 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 5 02:42:07 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 5 02:42:07 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 5 02:42:07 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 5 02:42:07 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 5 02:42:07 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 5 02:42:07 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 5 02:42:07 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 5 02:42:07 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Closed udev Control Socket.
Oct 5 02:42:07 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Closed udev Kernel Socket.
Oct 5 02:42:07 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 5 02:42:07 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 5 02:42:07 localhost systemd[1]: Starting Cleanup udev Database...
Oct 5 02:42:07 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 5 02:42:07 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 5 02:42:07 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Stopped Create System Users.
Oct 5 02:42:07 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 5 02:42:07 localhost systemd[1]: Finished Cleanup udev Database.
Oct 5 02:42:07 localhost systemd[1]: Reached target Switch Root.
Oct 5 02:42:07 localhost systemd[1]: Starting Switch Root...
Oct 5 02:42:07 localhost systemd[1]: Switching root.
Oct 5 02:42:07 localhost systemd-journald[281]: Journal stopped
Oct 5 02:42:08 localhost systemd-journald[281]: Received SIGTERM from PID 1 (systemd).
Oct 5 02:42:08 localhost kernel: audit: type=1404 audit(1759646527.263:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 5 02:42:08 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 02:42:08 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 02:42:08 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 02:42:08 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 02:42:08 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 02:42:08 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 02:42:08 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 02:42:08 localhost kernel: audit: type=1403 audit(1759646527.391:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 5 02:42:08 localhost systemd[1]: Successfully loaded SELinux policy in 133.287ms.
Oct 5 02:42:08 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.154ms.
Oct 5 02:42:08 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 5 02:42:08 localhost systemd[1]: Detected virtualization kvm.
Oct 5 02:42:08 localhost systemd[1]: Detected architecture x86-64.
Oct 5 02:42:08 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 02:42:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 02:42:08 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd[1]: Stopped Switch Root.
Oct 5 02:42:08 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 5 02:42:08 localhost systemd[1]: Created slice Slice /system/getty.
Oct 5 02:42:08 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 5 02:42:08 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 5 02:42:08 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 5 02:42:08 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Oct 5 02:42:08 localhost systemd[1]: Created slice User and Session Slice.
Oct 5 02:42:08 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 5 02:42:08 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 5 02:42:08 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 5 02:42:08 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 5 02:42:08 localhost systemd[1]: Stopped target Switch Root.
Oct 5 02:42:08 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 5 02:42:08 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 5 02:42:08 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 5 02:42:08 localhost systemd[1]: Reached target Path Units.
Oct 5 02:42:08 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 5 02:42:08 localhost systemd[1]: Reached target Slice Units.
Oct 5 02:42:08 localhost systemd[1]: Reached target Swaps.
Oct 5 02:42:08 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 5 02:42:08 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 5 02:42:08 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 5 02:42:08 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 5 02:42:08 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 5 02:42:08 localhost systemd[1]: Listening on udev Control Socket.
Oct 5 02:42:08 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 5 02:42:08 localhost systemd[1]: Mounting Huge Pages File System...
Oct 5 02:42:08 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 5 02:42:08 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 5 02:42:08 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 5 02:42:08 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 5 02:42:08 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 5 02:42:08 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 5 02:42:08 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 5 02:42:08 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 5 02:42:08 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 5 02:42:08 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 5 02:42:08 localhost systemd[1]: Stopped Journal Service.
Oct 5 02:42:08 localhost systemd[1]: Starting Journal Service...
Oct 5 02:42:08 localhost systemd[1]: Starting Load Kernel Modules...
Oct 5 02:42:08 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 5 02:42:08 localhost kernel: fuse: init (API version 7.36)
Oct 5 02:42:08 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 5 02:42:08 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 5 02:42:08 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 5 02:42:08 localhost systemd[1]: Mounted Huge Pages File System.
Oct 5 02:42:08 localhost systemd-journald[618]: Journal started
Oct 5 02:42:08 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9) is 8.0M, max 314.7M, 306.7M free.
Oct 5 02:42:08 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 5 02:42:08 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd-modules-load[619]: Module 'msr' is built in
Oct 5 02:42:08 localhost systemd[1]: Started Journal Service.
Oct 5 02:42:08 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 5 02:42:08 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 5 02:42:08 localhost kernel: ACPI: bus type drm_connector registered
Oct 5 02:42:08 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 5 02:42:08 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 5 02:42:08 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 5 02:42:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 5 02:42:08 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 5 02:42:08 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 5 02:42:08 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 5 02:42:08 localhost systemd[1]: Finished Load Kernel Modules.
Oct 5 02:42:08 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 5 02:42:08 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 5 02:42:08 localhost systemd[1]: Mounting FUSE Control File System...
Oct 5 02:42:08 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 5 02:42:08 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 5 02:42:08 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 5 02:42:08 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 5 02:42:08 localhost systemd[1]: Starting Load/Save Random Seed...
Oct 5 02:42:08 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 5 02:42:08 localhost systemd[1]: Starting Create System Users...
Oct 5 02:42:08 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9) is 8.0M, max 314.7M, 306.7M free.
Oct 5 02:42:08 localhost systemd-journald[618]: Received client request to flush runtime journal.
Oct 5 02:42:08 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 5 02:42:08 localhost systemd[1]: Mounted FUSE Control File System.
Oct 5 02:42:08 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 5 02:42:08 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 5 02:42:08 localhost systemd[1]: Finished Load/Save Random Seed.
Oct 5 02:42:08 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 5 02:42:08 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 5 02:42:08 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Oct 5 02:42:08 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Oct 5 02:42:08 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Oct 5 02:42:08 localhost systemd[1]: Finished Create System Users.
Oct 5 02:42:08 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 5 02:42:08 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 5 02:42:08 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 5 02:42:08 localhost systemd[1]: Set up automount EFI System Partition Automount.
Oct 5 02:42:08 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 5 02:42:08 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 5 02:42:08 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Oct 5 02:42:08 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 5 02:42:08 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 5 02:42:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 5 02:42:08 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 5 02:42:08 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 5 02:42:08 localhost systemd-udevd[650]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 02:42:08 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Oct 5 02:42:08 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Oct 5 02:42:08 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 5 02:42:08 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 5 02:42:08 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Oct 5 02:42:09 localhost systemd-fsck[681]: fsck.fat 4.2 (2021-01-31)
Oct 5 02:42:09 localhost systemd-fsck[681]: /dev/vda2: 12 files, 1782/51145 clusters
Oct 5 02:42:09 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Oct 5 02:42:09 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 5 02:42:09 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 5 02:42:09 localhost kernel: Console: switching to colour dummy device 80x25
Oct 5 02:42:09 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 5 02:42:09 localhost kernel: [drm] features: -context_init
Oct 5 02:42:09 localhost kernel: [drm] number of scanouts: 1
Oct 5 02:42:09 localhost kernel: [drm] number of cap sets: 0
Oct 5 02:42:09 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Oct 5 02:42:09 localhost kernel: SVM: TSC scaling supported
Oct 5 02:42:09 localhost kernel: kvm: Nested Virtualization enabled
Oct 5 02:42:09 localhost kernel: SVM: kvm: Nested Paging enabled
Oct 5 02:42:09 localhost kernel: SVM: LBR virtualization supported
Oct 5 02:42:09 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Oct 5 02:42:09 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 5 02:42:09 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 5 02:42:09 localhost systemd[1]: Mounting /boot...
Oct 5 02:42:09 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Oct 5 02:42:09 localhost kernel: XFS (vda3): Ending clean mount
Oct 5 02:42:09 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Oct 5 02:42:09 localhost systemd[1]: Mounted /boot.
Oct 5 02:42:09 localhost systemd[1]: Mounting /boot/efi...
Oct 5 02:42:09 localhost systemd[1]: Mounted /boot/efi.
Oct 5 02:42:09 localhost systemd[1]: Reached target Local File Systems.
Oct 5 02:42:09 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 5 02:42:09 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 5 02:42:09 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 5 02:42:09 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 5 02:42:09 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 5 02:42:09 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 5 02:42:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 5 02:42:09 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 708 (bootctl)
Oct 5 02:42:09 localhost systemd[1]: Starting File System Check on /dev/vda2...
Oct 5 02:42:09 localhost systemd[1]: Finished File System Check on /dev/vda2.
Oct 5 02:42:09 localhost systemd[1]: Mounting EFI System Partition Automount...
Oct 5 02:42:09 localhost systemd[1]: Mounted EFI System Partition Automount.
Oct 5 02:42:09 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 5 02:42:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 5 02:42:09 localhost systemd[1]: Starting Security Auditing Service...
Oct 5 02:42:09 localhost systemd[1]: Starting RPC Bind...
Oct 5 02:42:09 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 5 02:42:09 localhost auditd[725]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Oct 5 02:42:09 localhost auditd[725]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Oct 5 02:42:09 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 5 02:42:09 localhost systemd[1]: Started RPC Bind.
Oct 5 02:42:09 localhost augenrules[730]: /sbin/augenrules: No change
Oct 5 02:42:09 localhost augenrules[740]: No rules
Oct 5 02:42:09 localhost augenrules[740]: enabled 1
Oct 5 02:42:09 localhost augenrules[740]: failure 1
Oct 5 02:42:09 localhost augenrules[740]: pid 725
Oct 5 02:42:09 localhost augenrules[740]: rate_limit 0
Oct 5 02:42:09 localhost augenrules[740]: backlog_limit 8192
Oct 5 02:42:09 localhost augenrules[740]: lost 0
Oct 5 02:42:09 localhost augenrules[740]: backlog 3
Oct 5 02:42:09 localhost augenrules[740]: backlog_wait_time 60000
Oct 5 02:42:09 localhost augenrules[740]: backlog_wait_time_actual 0
Oct 5 02:42:09 localhost augenrules[740]: enabled 1
Oct 5 02:42:09 localhost augenrules[740]: failure 1
Oct 5 02:42:09 localhost augenrules[740]: pid 725
Oct 5 02:42:09 localhost augenrules[740]: rate_limit 0
Oct 5 02:42:09 localhost augenrules[740]: backlog_limit 8192
Oct 5 02:42:09 localhost augenrules[740]: lost 0
Oct 5 02:42:09 localhost augenrules[740]: backlog 0
Oct 5 02:42:09 localhost augenrules[740]: backlog_wait_time 60000
Oct 5 02:42:09 localhost augenrules[740]: backlog_wait_time_actual 0
Oct 5 02:42:09 localhost augenrules[740]: enabled 1
Oct 5 02:42:09 localhost augenrules[740]: failure 1
Oct 5 02:42:09 localhost augenrules[740]: pid 725
Oct 5 02:42:09 localhost augenrules[740]: rate_limit 0
Oct 5 02:42:09 localhost augenrules[740]: backlog_limit 8192
Oct 5 02:42:09 localhost augenrules[740]: lost 0
Oct 5 02:42:09 localhost augenrules[740]: backlog 1
Oct 5 02:42:09 localhost augenrules[740]: backlog_wait_time 60000
Oct 5 02:42:09 localhost augenrules[740]: backlog_wait_time_actual 0
Oct 5 02:42:09 localhost systemd[1]: Started Security Auditing Service.
Oct 5 02:42:09 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 5 02:42:09 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 5 02:42:09 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 5 02:42:09 localhost systemd[1]: Starting Update is Completed...
Oct 5 02:42:09 localhost systemd[1]: Finished Update is Completed.
Oct 5 02:42:09 localhost systemd[1]: Reached target System Initialization.
Oct 5 02:42:09 localhost systemd[1]: Started dnf makecache --timer.
Oct 5 02:42:09 localhost systemd[1]: Started Daily rotation of log files.
Oct 5 02:42:09 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 5 02:42:09 localhost systemd[1]: Reached target Timer Units.
Oct 5 02:42:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 5 02:42:09 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 5 02:42:09 localhost systemd[1]: Reached target Socket Units.
Oct 5 02:42:09 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Oct 5 02:42:09 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 5 02:42:09 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 5 02:42:09 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 5 02:42:09 localhost systemd[1]: Reached target Basic System.
Oct 5 02:42:09 localhost systemd[1]: Starting NTP client/server...
Oct 5 02:42:09 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 5 02:42:09 localhost journal[751]: Ready
Oct 5 02:42:09 localhost systemd[1]: Started irqbalance daemon.
Oct 5 02:42:09 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 5 02:42:09 localhost systemd[1]: Starting System Logging Service...
Oct 5 02:42:09 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 02:42:09 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 02:42:09 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 02:42:09 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 02:42:09 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 5 02:42:09 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 5 02:42:09 localhost systemd[1]: Starting User Login Management...
Oct 5 02:42:09 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 5 02:42:09 localhost systemd[1]: Started System Logging Service.
Oct 5 02:42:09 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Oct 5 02:42:09 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Oct 5 02:42:09 localhost systemd-logind[760]: New seat seat0.
Oct 5 02:42:09 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 5 02:42:09 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 5 02:42:09 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 5 02:42:09 localhost systemd[1]: Started User Login Management.
Oct 5 02:42:09 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Oct 5 02:42:09 localhost chronyd[766]: Loaded seccomp filter (level 2)
Oct 5 02:42:09 localhost systemd[1]: Started NTP client/server.
Oct 5 02:42:09 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 02:42:10 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sun, 05 Oct 2025 06:42:10 +0000. Up 6.54 seconds.
Oct 5 02:42:10 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpbn3frcwa.mount: Deactivated successfully.
Oct 5 02:42:10 localhost systemd[1]: Starting Hostname Service...
Oct 5 02:42:10 localhost systemd[1]: Started Hostname Service.
Oct 5 02:42:10 localhost systemd-hostnamed[784]: Hostname set to (static)
Oct 5 02:42:10 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Oct 5 02:42:10 localhost systemd[1]: Reached target Preparation for Network.
Oct 5 02:42:10 localhost systemd[1]: Starting Network Manager...
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0034] NetworkManager (version 1.42.2-1.el9) is starting... (boot:080e1fca-8a85-4a4d-ba02-f4906acb2264)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0040] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0074] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 5 02:42:11 localhost systemd[1]: Started Network Manager.
Oct 5 02:42:11 localhost systemd[1]: Reached target Network.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0156] manager[0x55c0ee784020]: monitoring kernel firmware directory '/lib/firmware'.
Oct 5 02:42:11 localhost systemd[1]: Starting Network Manager Wait Online...
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0194] hostname: hostname: using hostnamed
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0194] hostname: static hostname changed from (none) to "np0005471152.novalocal"
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0205] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 5 02:42:11 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 5 02:42:11 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Oct 5 02:42:11 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0373] manager[0x55c0ee784020]: rfkill: Wi-Fi hardware radio set enabled
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0374] manager[0x55c0ee784020]: rfkill: WWAN hardware radio set enabled
Oct 5 02:42:11 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Oct 5 02:42:11 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0489] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0493] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0515] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0516] manager: Networking is enabled by state file
Oct 5 02:42:11 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 5 02:42:11 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 5 02:42:11 localhost systemd[1]: Reached target NFS client services.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0581] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0582] settings: Loaded settings plugin: keyfile (internal)
Oct 5 02:42:11 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 5 02:42:11 localhost systemd[1]: Reached target Remote File Systems.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0631] dhcp: init: Using DHCP client 'internal'
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0635] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 5 02:42:11 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0657] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0667] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0684] device (lo): Activation: starting connection 'lo' (99ab9a05-b88d-43bf-8a25-cd5134677bce)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0702] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0709] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Oct 5 02:42:11 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0748] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0751] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0752] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0754] device (eth0): carrier: link connected
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0756] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0760] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0769] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0775] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0776] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0780] manager: NetworkManager state is now CONNECTING
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0782] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0792] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0795] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 5 02:42:11 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0905] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0908] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0911] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0916] device (lo): Activation: successful, device activated.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0923] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0951] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0976] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0979] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0982] manager: NetworkManager state is now CONNECTED_SITE
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0985] device (eth0): Activation: successful, device activated.
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0990] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 5 02:42:11 localhost NetworkManager[789]: [1759646531.0993] manager: startup complete
Oct 5 02:42:11 localhost systemd[1]: Finished Network Manager Wait Online.
Oct 5 02:42:11 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Oct 5 02:42:11 localhost cloud-init[905]: Cloud-init v. 22.1-9.el9 running 'init' at Sun, 05 Oct 2025 06:42:11 +0000. Up 7.46 seconds.
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | eth0 | True | 38.102.83.53 | 255.255.255.0 | global | fa:16:3e:3e:99:36 |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | eth0 | True | fe80::f816:3eff:fe3e:9936/64 | . | link | fa:16:3e:3e:99:36 |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | lo | True | ::1/128 | . | host | . |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: | 3 | multicast | :: | eth0 | U |
Oct 5 02:42:11 localhost cloud-init[905]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 5 02:42:11 localhost systemd[1]: Starting Authorization Manager...
Oct 5 02:42:11 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Oct 5 02:42:11 localhost polkitd[1036]: Started polkitd version 0.117
Oct 5 02:42:11 localhost systemd[1]: Started Authorization Manager.
Oct 5 02:42:14 localhost cloud-init[905]: Generating public/private rsa key pair.
Oct 5 02:42:14 localhost cloud-init[905]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 5 02:42:14 localhost cloud-init[905]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 5 02:42:14 localhost cloud-init[905]: The key fingerprint is:
Oct 5 02:42:14 localhost cloud-init[905]: SHA256:5zq5xbofERIOt5pLxcU8WRxGlxyKXY2v3vgBd6DKoQ8 root@np0005471152.novalocal
Oct 5 02:42:14 localhost cloud-init[905]: The key's randomart image is:
Oct 5 02:42:14 localhost cloud-init[905]: +---[RSA 3072]----+
Oct 5 02:42:14 localhost cloud-init[905]: | . oo.==o+= |
Oct 5 02:42:14 localhost cloud-init[905]: | = +=+.++ .|
Oct 5 02:42:14 localhost cloud-init[905]: | * o.o .. |
Oct 5 02:42:14 localhost cloud-init[905]: | + . . . ..|
Oct 5 02:42:14 localhost cloud-init[905]: | + S + .. o.|
Oct 5 02:42:14 localhost cloud-init[905]: | . . * + + .|
Oct 5 02:42:14 localhost cloud-init[905]: | . E.B . + |
Oct 5 02:42:14 localhost cloud-init[905]: | o* . o o|
Oct 5 02:42:14 localhost cloud-init[905]: | ==o ..|
Oct 5 02:42:14 localhost cloud-init[905]: +----[SHA256]-----+
Oct 5 02:42:14 localhost cloud-init[905]: Generating public/private ecdsa key pair.
Oct 5 02:42:14 localhost cloud-init[905]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 5 02:42:14 localhost cloud-init[905]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 5 02:42:14 localhost cloud-init[905]: The key fingerprint is:
Oct 5 02:42:14 localhost cloud-init[905]: SHA256:qh2zwdVD3uG/F5+cJ+L9+meyDNio+GWcRcFHnK3ipD0 root@np0005471152.novalocal
Oct 5 02:42:14 localhost cloud-init[905]: The key's randomart image is:
Oct 5 02:42:14 localhost cloud-init[905]: +---[ECDSA 256]---+
Oct 5 02:42:14 localhost cloud-init[905]: | ..o.o |
Oct 5 02:42:14 localhost cloud-init[905]: | ..+ .|
Oct 5 02:42:14 localhost cloud-init[905]: | . o. . |
Oct 5 02:42:14 localhost cloud-init[905]: | + +o.. |
Oct 5 02:42:14 localhost cloud-init[905]: | S +=+. |
Oct 5 02:42:14 localhost cloud-init[905]: | . o ..BE. . |
Oct 5 02:42:14 localhost cloud-init[905]: | * B o.o =|
Oct 5 02:42:14 localhost cloud-init[905]: | o * + .+o=*|
Oct 5 02:42:14 localhost cloud-init[905]: | . +.o ...*X=|
Oct 5 02:42:14 localhost cloud-init[905]: +----[SHA256]-----+
Oct 5 02:42:14 localhost cloud-init[905]: Generating public/private ed25519 key pair.
Oct 5 02:42:14 localhost cloud-init[905]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 5 02:42:14 localhost cloud-init[905]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 5 02:42:14 localhost cloud-init[905]: The key fingerprint is:
Oct 5 02:42:14 localhost cloud-init[905]: SHA256:yMotoiKuS7skfe+4TkWD4suuyUL3oVgp6kSvjBWlNzA root@np0005471152.novalocal
Oct 5 02:42:14 localhost cloud-init[905]: The key's randomart image is:
Oct 5 02:42:14 localhost cloud-init[905]: +--[ED25519 256]--+
Oct 5 02:42:14 localhost cloud-init[905]: | |
Oct 5 02:42:14 localhost cloud-init[905]: | . |
Oct 5 02:42:14 localhost cloud-init[905]: | E o o |
Oct 5 02:42:14 localhost cloud-init[905]: | . * o o |
Oct 5 02:42:14 localhost cloud-init[905]: | .+ + + S |
Oct 5 02:42:14 localhost cloud-init[905]: |.=.O * |
Oct 5 02:42:14 localhost cloud-init[905]: |++@.O o |
Oct 5 02:42:14 localhost cloud-init[905]: |&*o= = |
Oct 5 02:42:14 localhost cloud-init[905]: |#@o.+oo |
Oct 5 02:42:14 localhost cloud-init[905]: +----[SHA256]-----+
Oct 5 02:42:14 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Oct 5 02:42:14 localhost systemd[1]: Reached target Cloud-config availability.
Oct 5 02:42:14 localhost systemd[1]: Reached target Network is Online.
Oct 5 02:42:14 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Oct 5 02:42:14 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Oct 5 02:42:14 localhost systemd[1]: Starting Crash recovery kernel arming...
Oct 5 02:42:14 localhost systemd[1]: Starting Notify NFS peers of a restart...
Oct 5 02:42:14 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 02:42:14 localhost systemd[1]: Starting Permit User Sessions...
Oct 5 02:42:14 localhost sm-notify[1132]: Version 2.5.4 starting
Oct 5 02:42:14 localhost systemd[1]: Started Notify NFS peers of a restart.
Oct 5 02:42:14 localhost systemd[1]: Finished Permit User Sessions.
Oct 5 02:42:14 localhost systemd[1]: Started Command Scheduler.
Oct 5 02:42:14 localhost systemd[1]: Started Getty on tty1.
Oct 5 02:42:14 localhost systemd[1]: Started Serial Getty on ttyS0.
Oct 5 02:42:14 localhost sshd[1133]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:14 localhost systemd[1]: Reached target Login Prompts.
Oct 5 02:42:14 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 02:42:15 localhost systemd[1]: Reached target Multi-User System.
Oct 5 02:42:15 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 5 02:42:15 localhost sshd[1142]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 5 02:42:15 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 5 02:42:15 localhost sshd[1156]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost sshd[1170]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost sshd[1182]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost kdumpctl[1139]: kdump: No kdump initial ramdisk found.
Oct 5 02:42:15 localhost kdumpctl[1139]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Oct 5 02:42:15 localhost sshd[1193]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost sshd[1199]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost sshd[1212]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost sshd[1231]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost sshd[1243]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:42:15 localhost cloud-init[1314]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sun, 05 Oct 2025 06:42:15 +0000. Up 11.34 seconds.
Oct 5 02:42:15 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Oct 5 02:42:15 localhost systemd[1]: Starting Execute cloud user/final scripts...
Oct 5 02:42:15 localhost dracut[1435]: dracut-057-21.git20230214.el9
Oct 5 02:42:15 localhost cloud-init[1453]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sun, 05 Oct 2025 06:42:15 +0000. Up 11.71 seconds.
Oct 5 02:42:15 localhost dracut[1437]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Oct 5 02:42:15 localhost cloud-init[1492]: #############################################################
Oct 5 02:42:15 localhost cloud-init[1499]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 5 02:42:15 localhost cloud-init[1506]: 256 SHA256:qh2zwdVD3uG/F5+cJ+L9+meyDNio+GWcRcFHnK3ipD0 root@np0005471152.novalocal (ECDSA)
Oct 5 02:42:15 localhost cloud-init[1513]: 256 SHA256:yMotoiKuS7skfe+4TkWD4suuyUL3oVgp6kSvjBWlNzA root@np0005471152.novalocal (ED25519)
Oct 5 02:42:15 localhost cloud-init[1523]: 3072 SHA256:5zq5xbofERIOt5pLxcU8WRxGlxyKXY2v3vgBd6DKoQ8 root@np0005471152.novalocal (RSA)
Oct 5 02:42:15 localhost cloud-init[1527]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 5 02:42:15 localhost cloud-init[1531]: #############################################################
Oct 5 02:42:15 localhost cloud-init[1453]: Cloud-init v. 22.1-9.el9 finished at Sun, 05 Oct 2025 06:42:15 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 11.97 seconds
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 5 02:42:15 localhost systemd[1]: Reloading Network Manager...
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 5 02:42:15 localhost NetworkManager[789]: [1759646535.9804] audit: op="reload" arg="0" pid=1602 uid=0 result="success"
Oct 5 02:42:15 localhost NetworkManager[789]: [1759646535.9813] config: signal: SIGHUP (no changes from disk)
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 5 02:42:15 localhost systemd[1]: Reloaded Network Manager.
Oct 5 02:42:15 localhost systemd[1]: Finished Execute cloud user/final scripts.
Oct 5 02:42:15 localhost systemd[1]: Reached target Cloud-init target.
Oct 5 02:42:15 localhost chronyd[766]: Selected source 149.56.19.163 (2.rhel.pool.ntp.org)
Oct 5 02:42:15 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 5 02:42:16 localhost chronyd[766]: System clock TAI offset set to 37 seconds
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: memstrack is not available
Oct 5 02:42:16 localhost dracut[1437]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 5 02:42:16 localhost dracut[1437]: memstrack is not available
Oct 5 02:42:16 localhost dracut[1437]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 5 02:42:17 localhost dracut[1437]: *** Including module: systemd ***
Oct 5 02:42:17 localhost dracut[1437]: *** Including module: systemd-initrd ***
Oct 5 02:42:17 localhost dracut[1437]: *** Including module: i18n ***
Oct 5 02:42:17 localhost dracut[1437]: No KEYMAP configured.
Oct 5 02:42:17 localhost dracut[1437]: *** Including module: drm ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: prefixdevname ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: kernel-modules ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: kernel-modules-extra ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: qemu ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: fstab-sys ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: rootfs-block ***
Oct 5 02:42:18 localhost dracut[1437]: *** Including module: terminfo ***
Oct 5 02:42:19 localhost dracut[1437]: *** Including module: udev-rules ***
Oct 5 02:42:19 localhost dracut[1437]: Skipping udev rule: 91-permissions.rules
Oct 5 02:42:19 localhost dracut[1437]: Skipping udev rule: 80-drivers-modprobe.rules
Oct 5 02:42:19 localhost dracut[1437]: *** Including module: virtiofs ***
Oct 5 02:42:19 localhost dracut[1437]: *** Including module: dracut-systemd ***
Oct 5 02:42:19 localhost dracut[1437]: *** Including module: usrmount ***
Oct 5 02:42:19 localhost dracut[1437]: *** Including module: base ***
Oct 5 02:42:20 localhost dracut[1437]: *** Including module: fs-lib ***
Oct 5 02:42:20 localhost dracut[1437]: *** Including module: kdumpbase ***
Oct 5 02:42:20 localhost dracut[1437]: *** Including module: microcode_ctl-fw_dir_override ***
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl module: mangling fw_dir
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-55-04" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Oct 5 02:42:20 localhost dracut[1437]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Oct 5 02:42:20 localhost dracut[1437]: *** Including module: shutdown ***
Oct 5 02:42:20 localhost dracut[1437]: *** Including module: squash ***
Oct 5 02:42:20 localhost dracut[1437]: *** Including modules done ***
Oct 5 02:42:20 localhost dracut[1437]: *** Installing kernel module dependencies ***
Oct 5 02:42:21 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 5 02:42:21 localhost dracut[1437]: *** Installing kernel module dependencies done ***
Oct 5 02:42:21 localhost dracut[1437]: *** Resolving executable dependencies ***
Oct 5 02:42:23 localhost dracut[1437]: *** Resolving executable dependencies done ***
Oct 5 02:42:23 localhost dracut[1437]: *** Hardlinking files ***
Oct 5 02:42:23 localhost dracut[1437]: Mode: real
Oct 5 02:42:23 localhost dracut[1437]: Files: 1099
Oct 5 02:42:23 localhost dracut[1437]: Linked: 3 files
Oct 5 02:42:23 localhost dracut[1437]: Compared: 0 xattrs
Oct 5 02:42:23 localhost dracut[1437]: Compared: 373 files
Oct 5 02:42:23 localhost dracut[1437]: Saved: 61.04 KiB
Oct 5 02:42:23 localhost dracut[1437]: Duration: 0.045975 seconds
Oct 5 02:42:23 localhost dracut[1437]: *** Hardlinking files done ***
Oct 5 02:42:23 localhost dracut[1437]: Could not find 'strip'. Not stripping the initramfs.
Oct 5 02:42:23 localhost dracut[1437]: *** Generating early-microcode cpio image ***
Oct 5 02:42:23 localhost dracut[1437]: *** Constructing AuthenticAMD.bin ***
Oct 5 02:42:23 localhost dracut[1437]: *** Store current command line parameters ***
Oct 5 02:42:23 localhost dracut[1437]: Stored kernel commandline:
Oct 5 02:42:23 localhost dracut[1437]: No dracut internal kernel commandline stored in the initramfs
Oct 5 02:42:23 localhost dracut[1437]: *** Install squash loader ***
Oct 5 02:42:24 localhost dracut[1437]: *** Squashing the files inside the initramfs ***
Oct 5 02:42:25 localhost dracut[1437]: *** Squashing the files inside the initramfs done ***
Oct 5 02:42:25 localhost dracut[1437]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Oct 5 02:42:25 localhost dracut[1437]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Oct 5 02:42:25 localhost kdumpctl[1139]: kdump: kexec: loaded kdump kernel
Oct 5 02:42:25 localhost kdumpctl[1139]: kdump: Starting kdump: [OK]
Oct 5 02:42:25 localhost systemd[1]: Finished Crash recovery kernel arming.
Oct 5 02:42:25 localhost systemd[1]: Startup finished in 1.222s (kernel) + 2.164s (initrd) + 18.509s (userspace) = 21.896s.
Oct 5 02:42:41 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 5 02:44:02 localhost sshd[4178]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:44:03 localhost systemd[1]: Created slice User Slice of UID 1000.
Oct 5 02:44:03 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 5 02:44:03 localhost systemd-logind[760]: New session 1 of user zuul.
Oct 5 02:44:03 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 5 02:44:03 localhost systemd[1]: Starting User Manager for UID 1000...
Oct 5 02:44:03 localhost systemd[4182]: Queued start job for default target Main User Target.
Oct 5 02:44:03 localhost systemd[4182]: Created slice User Application Slice.
Oct 5 02:44:03 localhost systemd[4182]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 5 02:44:03 localhost systemd[4182]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 02:44:03 localhost systemd[4182]: Reached target Paths.
Oct 5 02:44:03 localhost systemd[4182]: Reached target Timers.
Oct 5 02:44:03 localhost systemd[4182]: Starting D-Bus User Message Bus Socket...
Oct 5 02:44:03 localhost systemd[4182]: Starting Create User's Volatile Files and Directories...
Oct 5 02:44:03 localhost systemd[4182]: Finished Create User's Volatile Files and Directories.
Oct 5 02:44:03 localhost systemd[4182]: Listening on D-Bus User Message Bus Socket.
Oct 5 02:44:03 localhost systemd[4182]: Reached target Sockets.
Oct 5 02:44:03 localhost systemd[4182]: Reached target Basic System.
Oct 5 02:44:03 localhost systemd[4182]: Reached target Main User Target.
Oct 5 02:44:03 localhost systemd[4182]: Startup finished in 117ms.
Oct 5 02:44:03 localhost systemd[1]: Started User Manager for UID 1000.
Oct 5 02:44:03 localhost systemd[1]: Started Session 1 of User zuul.
Oct 5 02:44:03 localhost python3[4234]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:14 localhost python3[4253]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:21 localhost python3[4306]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:22 localhost python3[4336]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 5 02:44:26 localhost python3[4352]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:26 localhost python3[4366]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:28 localhost python3[4425]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 02:44:28 localhost python3[4466]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759646667.7214725-395-229619761260321/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa follow=False checksum=cf09eb456a314382f639138519dc421f9df58c1f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:29 localhost python3[4539]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 02:44:30 localhost python3[4580]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759646669.5052083-495-40085275116452/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa.pub follow=False checksum=eb73baa214aed5877413178ed76ec0f476520beb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:31 localhost python3[4608]: ansible-ping Invoked with data=pong
Oct 5 02:44:34 localhost python3[4622]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 02:44:38 localhost python3[4675]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 5 02:44:41 localhost python3[4697]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:41 localhost python3[4711]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:41 localhost python3[4725]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:43 localhost python3[4739]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:43 localhost python3[4753]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:43 localhost python3[4767]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:46 localhost python3[4783]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:47 localhost python3[4831]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 02:44:48 localhost python3[4874]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759646687.712763-106-65805343532965/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 02:44:56 localhost python3[4903]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4917]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4931]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:56 localhost python3[4945]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[4959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[4973]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:57 localhost python3[4987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5001]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5029]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:58 localhost python3[5043]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5057]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5071]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5085]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:44:59 localhost python3[5099]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5113]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5127]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5141]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:00 localhost python3[5155]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5169]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5183]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5197]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:01 localhost python3[5211]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:02 localhost python3[5225]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:02 localhost python3[5239]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:02 localhost python3[5253]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 02:45:03 localhost python3[5269]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 5 02:45:03 localhost systemd[1]: Starting Time & Date Service...
Oct 5 02:45:03 localhost systemd[1]: Started Time & Date Service.
Oct 5 02:45:03 localhost systemd-timedated[5271]: Changed time zone to 'UTC' (UTC).
Oct 5 02:45:04 localhost python3[5290]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:05 localhost python3[5336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:06 localhost python3[5377]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759646705.6780963-501-181435179614998/source _original_basename=tmppd439lax follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:07 localhost python3[5437]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:07 localhost python3[5478]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759646707.217612-591-43294927312282/source _original_basename=tmpp3o39ry6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:09 localhost python3[5540]: ansible-ansible.legacy.stat Invoked with 
path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:09 localhost python3[5583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759646709.298217-734-255159057571126/source _original_basename=tmp4qp8ghft follow=False checksum=1cc2ea2b76967ada2d4710a35e138c3751da2100 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:11 localhost python3[5611]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:45:11 localhost python3[5627]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:45:12 localhost python3[5677]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:45:12 localhost python3[5720]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759646712.3752623-858-167703504477134/source _original_basename=tmpar1ah4ko follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:24 localhost python3[5751]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-4bc2-3529-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:45:33 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 5 02:45:35 localhost python3[5772]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4bc2-3529-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Oct 5 02:45:37 localhost python3[5790]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:45:56 localhost python3[5806]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:46:41 localhost systemd[4182]: Starting Mark boot as successful... Oct 5 02:46:41 localhost systemd[4182]: Finished Mark boot as successful. 
Oct 5 02:46:56 localhost systemd-logind[760]: Session 1 logged out. Waiting for processes to exit. Oct 5 02:47:31 localhost systemd[1]: Unmounting EFI System Partition Automount... Oct 5 02:47:31 localhost systemd[1]: efi.mount: Deactivated successfully. Oct 5 02:47:31 localhost systemd[1]: Unmounted EFI System Partition Automount. Oct 5 02:49:41 localhost systemd[4182]: Created slice User Background Tasks Slice. Oct 5 02:49:41 localhost systemd[4182]: Starting Cleanup of User's Temporary Files and Directories... Oct 5 02:49:41 localhost systemd[4182]: Finished Cleanup of User's Temporary Files and Directories. Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Oct 5 02:49:53 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Oct 5 02:49:53 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9008] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Oct 5 02:49:53 localhost systemd-udevd[5815]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9137] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9166] settings: (eth1): created default wired connection 'Wired connection 1' Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9170] device (eth1): carrier: link connected Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9173] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9179] policy: auto-activating connection 'Wired connection 1' (0b21ae61-9328-3226-9be0-01e1f93d6a0b) Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9185] device (eth1): Activation: starting connection 'Wired connection 1' (0b21ae61-9328-3226-9be0-01e1f93d6a0b) Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9186] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9189] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9194] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Oct 5 02:49:53 localhost NetworkManager[789]: [1759646993.9198] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:49:54 localhost sshd[5818]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:49:54 localhost systemd-logind[760]: New session 3 of user zuul. Oct 5 02:49:54 localhost systemd[1]: Started Session 3 of User zuul. 
Oct 5 02:49:54 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Oct 5 02:49:55 localhost python3[5835]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-18e8-600c-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:50:08 localhost python3[5885]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:50:08 localhost python3[5928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759647008.1041257-537-79887767740499/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=826931bd9b3d5d0a148aa8018cb643206e78d9f6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:50:09 localhost python3[5958]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 02:50:10 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Oct 5 02:50:10 localhost systemd[1]: Stopped Network Manager Wait Online. Oct 5 02:50:10 localhost systemd[1]: Stopping Network Manager Wait Online... Oct 5 02:50:10 localhost systemd[1]: Stopping Network Manager... Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3615] caught SIGTERM, shutting down normally. 
Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3769] dhcp4 (eth0): canceled DHCP transaction Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3769] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3769] dhcp4 (eth0): state changed no lease Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3773] manager: NetworkManager state is now CONNECTING Oct 5 02:50:10 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3879] dhcp4 (eth1): canceled DHCP transaction Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3880] dhcp4 (eth1): state changed no lease Oct 5 02:50:10 localhost NetworkManager[789]: [1759647010.3942] exiting (success) Oct 5 02:50:10 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 5 02:50:10 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Oct 5 02:50:10 localhost systemd[1]: Stopped Network Manager. Oct 5 02:50:10 localhost systemd[1]: NetworkManager.service: Consumed 2.072s CPU time. Oct 5 02:50:10 localhost systemd[1]: Starting Network Manager... Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.4413] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:080e1fca-8a85-4a4d-ba02-f4906acb2264) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.4416] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.4440] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Oct 5 02:50:10 localhost systemd[1]: Started Network Manager. Oct 5 02:50:10 localhost systemd[1]: Starting Network Manager Wait Online... Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.4487] manager[0x5581ad4fa090]: monitoring kernel firmware directory '/lib/firmware'. 
Oct 5 02:50:10 localhost systemd[1]: Starting Hostname Service... Oct 5 02:50:10 localhost systemd[1]: Started Hostname Service. Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5268] hostname: hostname: using hostnamed Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5269] hostname: static hostname changed from (none) to "np0005471152.novalocal" Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5274] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5280] manager[0x5581ad4fa090]: rfkill: Wi-Fi hardware radio set enabled Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5280] manager[0x5581ad4fa090]: rfkill: WWAN hardware radio set enabled Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5315] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5315] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5316] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5317] manager: Networking is enabled by state file Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5324] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5325] settings: Loaded settings plugin: keyfile (internal) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5365] dhcp: init: Using DHCP client 'internal' Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5369] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5376] device (lo): state change: unmanaged -> unavailable (reason 
'connection-assumed', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5383] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5393] device (lo): Activation: starting connection 'lo' (99ab9a05-b88d-43bf-8a25-cd5134677bce) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5401] device (eth0): carrier: link connected Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5407] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5414] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5415] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5423] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5432] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5439] device (eth1): carrier: link connected Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5445] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5452] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0b21ae61-9328-3226-9be0-01e1f93d6a0b) (indicated) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5453] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:50:10 
localhost NetworkManager[5970]: [1759647010.5460] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5470] device (eth1): Activation: starting connection 'Wired connection 1' (0b21ae61-9328-3226-9be0-01e1f93d6a0b) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5497] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5501] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5503] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5506] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5511] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5514] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5517] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5521] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5528] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5532] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5542] device (eth1): state change: config -> 
ip-config (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5545] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5594] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5600] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5605] device (lo): Activation: successful, device activated. Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5613] dhcp4 (eth0): state changed new lease, address=38.102.83.53 Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5620] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5703] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5741] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5743] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5747] manager: NetworkManager state is now CONNECTED_SITE Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5750] device (eth0): Activation: successful, device activated. 
Oct 5 02:50:10 localhost NetworkManager[5970]: [1759647010.5756] manager: NetworkManager state is now CONNECTED_GLOBAL Oct 5 02:50:10 localhost python3[6033]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-18e8-600c-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:50:20 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 5 02:50:40 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 5 02:50:55 localhost NetworkManager[5970]: [1759647055.8621] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:55 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Oct 5 02:50:55 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Oct 5 02:50:55 localhost NetworkManager[5970]: [1759647055.8820] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:55 localhost NetworkManager[5970]: [1759647055.8823] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Oct 5 02:50:55 localhost NetworkManager[5970]: [1759647055.8829] device (eth1): Activation: successful, device activated. Oct 5 02:50:55 localhost NetworkManager[5970]: [1759647055.8837] manager: startup complete Oct 5 02:50:55 localhost systemd[1]: Finished Network Manager Wait Online. Oct 5 02:51:05 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Oct 5 02:51:10 localhost systemd[1]: session-3.scope: Deactivated successfully. Oct 5 02:51:10 localhost systemd[1]: session-3.scope: Consumed 1.430s CPU time. Oct 5 02:51:10 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit. 
Oct 5 02:51:10 localhost systemd-logind[760]: Removed session 3. Oct 5 02:51:28 localhost sshd[6058]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:51:28 localhost systemd-logind[760]: New session 4 of user zuul. Oct 5 02:51:28 localhost systemd[1]: Started Session 4 of User zuul. Oct 5 02:51:28 localhost python3[6109]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 02:51:28 localhost python3[6152]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759647088.2379158-628-61818939287754/source _original_basename=tmp4ot2u902 follow=False checksum=0e431f410d06a0c53c5dd3a865f2812ab817026e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:51:31 localhost systemd[1]: session-4.scope: Deactivated successfully. Oct 5 02:51:31 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit. Oct 5 02:51:31 localhost systemd-logind[760]: Removed session 4. Oct 5 02:57:40 localhost sshd[6169]: main: sshd: ssh-rsa algorithm is disabled Oct 5 02:57:40 localhost systemd[1]: Starting Cleanup of Temporary Directories... Oct 5 02:57:40 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Oct 5 02:57:40 localhost systemd[1]: Finished Cleanup of Temporary Directories. Oct 5 02:57:40 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Oct 5 02:57:40 localhost systemd-logind[760]: New session 5 of user zuul. Oct 5 02:57:40 localhost systemd[1]: Started Session 5 of User zuul. 
Oct 5 02:57:40 localhost python3[6191]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-183a-2b8b-000000001d22-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 02:57:42 localhost python3[6210]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:42 localhost python3[6226]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:42 localhost python3[6242]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:42 localhost python3[6258]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:43 localhost python3[6274]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 02:57:43 localhost python3[6274]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually Oct 5 02:57:45 localhost python3[6290]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 02:57:45 localhost systemd[1]: Reloading. Oct 5 02:57:45 localhost systemd-rc-local-generator[6308]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 02:57:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 02:57:46 localhost python3[6337]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 5 02:57:48 localhost python3[6353]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:48 localhost python3[6371]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:48 localhost python3[6389]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:49 localhost python3[6407]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:50 localhost python3[6424]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-183a-2b8b-000000001d28-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 02:57:50 localhost python3[6444]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 02:57:53 localhost systemd[1]: session-5.scope: Deactivated successfully.
Oct 5 02:57:53 localhost systemd[1]: session-5.scope: Consumed 3.336s CPU time.
Oct 5 02:57:53 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Oct 5 02:57:53 localhost systemd-logind[760]: Removed session 5.
Oct 5 02:59:12 localhost sshd[6450]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 02:59:12 localhost systemd-logind[760]: New session 6 of user zuul.
Oct 5 02:59:12 localhost systemd[1]: Started Session 6 of User zuul.
Oct 5 02:59:13 localhost systemd[1]: Starting RHSM dbus service...
Oct 5 02:59:13 localhost systemd[1]: Started RHSM dbus service.
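The command entries above record identical cgroup-v2 I/O limits being written, one slice at a time, to io.max files under /sys/fs/cgroup. A hedged sketch of the same operation (device 252:0 and the limit values are taken from the log; the target slices are assumed to exist on a cgroup-v2 host with the io controller enabled, and the mount point is parameterized so the loop can be exercised against a scratch directory):

```shell
#!/bin/sh
# Sketch of the io.max writes recorded in the log: apply the same
# riops/wiops/rbps/wbps limits for block device 252:0 to several
# top-level cgroup-v2 slices.
CGROUP_ROOT="${CGROUP_ROOT:-/sys/fs/cgroup}"
LIMITS='252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000'

apply_io_limits() {
    for cg in init.scope machine.slice system.slice user.slice; do
        f="$CGROUP_ROOT/$cg/io.max"
        if [ -w "$f" ]; then
            echo "$LIMITS" > "$f"   # the kernel validates MAJ:MIN and the keys
        else
            echo "skip: $f not writable" >&2
        fi
    done
}

apply_io_limits
```

The follow-up `cat` entry in the log reads each io.max back, which is the usual way to confirm the kernel accepted the line.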
Oct 5 02:59:13 localhost rhsm-service[6474]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:13 localhost rhsm-service[6474]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:13 localhost rhsm-service[6474]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:13 localhost rhsm-service[6474]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:17 localhost rhsm-service[6474]: INFO [subscription_manager.managerlib:90] Consumer created: np0005471152.novalocal (6e63178d-cc41-4e40-9232-0fd1da6b0dc1)
Oct 5 02:59:17 localhost subscription-manager[6474]: Registered system with identity: 6e63178d-cc41-4e40-9232-0fd1da6b0dc1
Oct 5 02:59:18 localhost rhsm-service[6474]: INFO [subscription_manager.entcertlib:131] certs updated:
Oct 5 02:59:18 localhost rhsm-service[6474]: Total updates: 1
Oct 5 02:59:18 localhost rhsm-service[6474]: Found (local) serial# []
Oct 5 02:59:18 localhost rhsm-service[6474]: Expected (UEP) serial# [3423798336380843583]
Oct 5 02:59:18 localhost rhsm-service[6474]: Added (new)
Oct 5 02:59:18 localhost rhsm-service[6474]: [sn:3423798336380843583 ( Content Access,) @ /etc/pki/entitlement/3423798336380843583.pem]
Oct 5 02:59:18 localhost rhsm-service[6474]: Deleted (rogue):
Oct 5 02:59:18 localhost rhsm-service[6474]:
Oct 5 02:59:18 localhost subscription-manager[6474]: Added subscription for 'Content Access' contract 'None'
Oct 5 02:59:18 localhost subscription-manager[6474]: Added subscription for product ' Content Access'
Oct 5 02:59:19 localhost rhsm-service[6474]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:19 localhost rhsm-service[6474]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 5 02:59:19 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:19 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:20 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:20 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:20 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 02:59:22 localhost python3[6565]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163efc-24cc-069e-d673-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:00:14 localhost python3[6584]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:00:53 localhost setsebool[6659]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 5 03:00:53 localhost setsebool[6659]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 5 03:01:01 localhost kernel: SELinux: Converting 409 SID table entries...
Oct 5 03:01:01 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:01:01 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:01:01 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:01:01 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:01:01 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:01:01 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:01:01 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:01:14 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=3 res=1
Oct 5 03:01:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:01:14 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:01:14 localhost systemd[1]: Reloading.
Oct 5 03:01:14 localhost systemd-rc-local-generator[7529]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:01:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:01:14 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:01:15 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:01:22 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:01:22 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:01:22 localhost systemd[1]: man-db-cache-update.service: Consumed 9.913s CPU time.
Oct 5 03:01:22 localhost systemd[1]: run-rf3d374d898e34f06bf470619d6c4b6cd.service: Deactivated successfully.
Oct 5 03:01:30 localhost systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck485221427-merged.mount: Deactivated successfully.
Oct 5 03:01:30 localhost podman[18265]: 2025-10-05 07:01:30.749021714 +0000 UTC m=+0.101549664 system refresh
Oct 5 03:01:31 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:01:31 localhost systemd[4182]: Starting D-Bus User Message Bus...
Oct 5 03:01:31 localhost dbus-broker-launch[18324]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 5 03:01:31 localhost dbus-broker-launch[18324]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 5 03:01:31 localhost systemd[4182]: Started D-Bus User Message Bus.
Oct 5 03:01:31 localhost journal[18324]: Ready
Oct 5 03:01:31 localhost systemd[4182]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Oct 5 03:01:31 localhost systemd[4182]: Created slice Slice /user.
Oct 5 03:01:31 localhost systemd[4182]: podman-18305.scope: unit configures an IP firewall, but not running as root.
Oct 5 03:01:31 localhost systemd[4182]: (This warning is only shown for the first unit using IP firewalling.)
Oct 5 03:01:31 localhost systemd[4182]: Started podman-18305.scope.
Oct 5 03:01:32 localhost systemd[4182]: Started podman-pause-da6aae51.scope.
Oct 5 03:01:32 localhost systemd[1]: session-6.scope: Deactivated successfully.
Oct 5 03:01:32 localhost systemd[1]: session-6.scope: Consumed 49.680s CPU time.
Oct 5 03:01:32 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Oct 5 03:01:32 localhost systemd-logind[760]: Removed session 6.
Oct 5 03:01:47 localhost sshd[18330]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18328]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18326]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18329]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:47 localhost sshd[18327]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:52 localhost sshd[18336]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:01:52 localhost systemd-logind[760]: New session 7 of user zuul.
Oct 5 03:01:52 localhost systemd[1]: Started Session 7 of User zuul.
Oct 5 03:01:52 localhost python3[18353]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP87f0TJ5B+7awiBdUKl49eWXdyOF8cjINJofgf8ukEJzb/lAYySAOznll5JFt00uw/yZng5hSo6312SA6R3VqM= zuul@np0005471143.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:01:53 localhost python3[18369]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP87f0TJ5B+7awiBdUKl49eWXdyOF8cjINJofgf8ukEJzb/lAYySAOznll5JFt00uw/yZng5hSo6312SA6R3VqM= zuul@np0005471143.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:01:54 localhost systemd[1]: session-7.scope: Deactivated successfully.
Oct 5 03:01:54 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Oct 5 03:01:54 localhost systemd-logind[760]: Removed session 7.
Oct 5 03:03:18 localhost sshd[18372]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:03:18 localhost systemd-logind[760]: New session 8 of user zuul.
Oct 5 03:03:18 localhost systemd[1]: Started Session 8 of User zuul.
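The authorized_key entries above add the same public key for two users with state=present and exclusive=False, i.e. append-if-absent. A minimal sketch of that idempotent append (the key string below is a placeholder, not the one in the log; paths are parameterized and the demo writes only to a scratch file):

```shell
#!/bin/sh
# Sketch of what ansible.posix.authorized_key state=present exclusive=False
# does: append a public key to an authorized_keys file only if the exact
# line is not already there.
add_key() {
    file=$1; key=$2
    mkdir -p "$(dirname "$file")"
    touch "$file"
    # -F: match as fixed string; -x: whole line must match.
    grep -qxF "$key" "$file" || echo "$key" >> "$file"
}

# Demo against a scratch file rather than a real ~/.ssh/authorized_keys.
demo=$(mktemp -d)/authorized_keys
add_key "$demo" "ssh-ed25519 AAAAexampleplaceholder zuul@example"
add_key "$demo" "ssh-ed25519 AAAAexampleplaceholder zuul@example"   # second call is a no-op
```

A real deployment would also set the file and directory modes (0600/0700), which the module handles via manage_dir=True.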
Oct 5 03:03:18 localhost python3[18391]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:03:19 localhost python3[18407]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471152.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 5 03:03:21 localhost python3[18457]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:21 localhost python3[18500]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759647800.867005-140-230454038977122/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa follow=False checksum=cf09eb456a314382f639138519dc421f9df58c1f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:22 localhost python3[18562]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:23 localhost python3[18605]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759647802.5529718-227-63489988698958/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=7f1aa78692d846b294ef5fe66a5a98ad_id_rsa.pub follow=False checksum=eb73baa214aed5877413178ed76ec0f476520beb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:25 localhost python3[18635]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:26 localhost python3[18681]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:26 localhost python3[18697]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpcq8emx4c recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:27 localhost python3[18757]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:28 localhost python3[18773]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpf_5dwlwd recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:29 localhost python3[18833]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:03:30 localhost python3[18849]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpg6rahcmo recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:03:30 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Oct 5 03:03:30 localhost systemd[1]: session-8.scope: Deactivated successfully.
Oct 5 03:03:30 localhost systemd[1]: session-8.scope: Consumed 3.532s CPU time.
Oct 5 03:03:30 localhost systemd-logind[760]: Removed session 8.
Oct 5 03:05:34 localhost sshd[18865]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:05:34 localhost systemd-logind[760]: New session 9 of user zuul.
Oct 5 03:05:34 localhost systemd[1]: Started Session 9 of User zuul.
Oct 5 03:05:35 localhost python3[18911]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:10:34 localhost systemd[1]: session-9.scope: Deactivated successfully.
Oct 5 03:10:34 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Oct 5 03:10:34 localhost systemd-logind[760]: Removed session 9.
Oct 5 03:11:35 localhost sshd[18914]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:15:50 localhost sshd[18918]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:15:50 localhost systemd-logind[760]: New session 10 of user zuul.
Oct 5 03:15:50 localhost systemd[1]: Started Session 10 of User zuul.
Oct 5 03:15:50 localhost python3[18935]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163efc-24cc-4d81-b09d-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:15:53 localhost python3[18955]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163efc-24cc-4d81-b09d-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:16:24 localhost python3[18975]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Oct 5 03:16:27 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:16:58 localhost python3[19131]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Oct 5 03:17:01 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:01 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:19 localhost python3[19332]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Oct 5 03:17:22 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:22 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:28 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:28 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:49 localhost python3[19667]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 5 03:17:52 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:52 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:58 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:17:58 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:21 localhost python3[20062]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 5 03:18:24 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:24 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:29 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:29 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:18:39 localhost systemd[1]: Starting dnf makecache...
Oct 5 03:18:39 localhost python3[20459]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:18:39 localhost dnf[20460]: Updating Subscription Management repositories.
Oct 5 03:18:41 localhost dnf[20460]: Failed determining last makecache time.
Oct 5 03:18:42 localhost dnf[20460]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 42 MB/s | 24 MB 00:00
Oct 5 03:18:42 localhost dnf[20460]: Fast Datapath for RHEL 9 x86_64 (RPMs) 1.3 MB/s | 560 kB 00:00
Oct 5 03:18:43 localhost dnf[20460]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 60 MB/s | 42 MB 00:00
Oct 5 03:18:43 localhost dnf[20460]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 27 MB/s | 14 MB 00:00
Oct 5 03:18:44 localhost dnf[20460]: Red Hat Enterprise Linux 9 for x86_64 - High Av 6.3 MB/s | 2.5 MB 00:00
Oct 5 03:18:45 localhost dnf[20460]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 64 MB/s | 44 MB 00:00
Oct 5 03:18:45 localhost dnf[20460]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 7.9 MB/s | 3.2 MB 00:00
Oct 5 03:18:46 localhost dnf[20460]: Metadata cache created.
Oct 5 03:18:46 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 5 03:18:46 localhost systemd[1]: Finished dnf makecache.
Oct 5 03:18:46 localhost systemd[1]: dnf-makecache.service: Consumed 4.117s CPU time.
Oct 5 03:19:09 localhost python3[20507]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:19:17 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 5 03:19:25 localhost kernel: SELinux: Converting 501 SID table entries...
Oct 5 03:19:25 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 03:19:25 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 03:19:25 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 03:19:25 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 03:19:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 03:19:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 03:19:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 03:19:28 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=4 res=1
Oct 5 03:19:28 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:19:28 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:19:28 localhost systemd[1]: Reloading.
Oct 5 03:19:28 localhost systemd-rc-local-generator[21147]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:19:28 localhost systemd-sysv-generator[21153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:19:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:19:28 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:19:29 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:19:29 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:19:29 localhost systemd[1]: run-rbe81bdd6a1254154885d02c14ee3d0e0.service: Deactivated successfully.
Oct 5 03:19:30 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:19:30 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 5 03:19:45 localhost python3[21711]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:19:59 localhost python3[21731]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:20:01 localhost python3[21779]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:20:02 localhost python3[21822]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759648801.6595607-335-33856675197035/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:20:04 localhost python3[21852]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:04 localhost systemd-journald[618]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 91.6 (305 of 333 items), suggesting rotation.
Oct 5 03:20:04 localhost systemd-journald[618]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 5 03:20:04 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:20:04 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:20:04 localhost python3[21873]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:05 localhost python3[21893]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:05 localhost python3[21913]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:05 localhost python3[21933]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 5 03:20:09 localhost python3[21953]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:20:09 localhost systemd[1]: Starting LSB: Bring up/down
networking... Oct 5 03:20:09 localhost network[21956]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 03:20:09 localhost network[21967]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 03:20:09 localhost network[21956]: WARN : [network] 'network-scripts' will be removed from distribution in near future. Oct 5 03:20:09 localhost network[21968]: 'network-scripts' will be removed from distribution in near future. Oct 5 03:20:09 localhost network[21956]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management. Oct 5 03:20:09 localhost network[21969]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 03:20:09 localhost NetworkManager[5970]: [1759648809.6771] audit: op="connections-reload" pid=21997 uid=0 result="success" Oct 5 03:20:09 localhost network[21956]: Bringing up loopback interface: [ OK ] Oct 5 03:20:09 localhost NetworkManager[5970]: [1759648809.8630] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22085 uid=0 result="success" Oct 5 03:20:09 localhost network[21956]: Bringing up interface eth0: [ OK ] Oct 5 03:20:09 localhost systemd[1]: Started LSB: Bring up/down networking. Oct 5 03:20:10 localhost python3[22126]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:20:10 localhost systemd[1]: Starting Open vSwitch Database Unit... Oct 5 03:20:10 localhost chown[22130]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Oct 5 03:20:10 localhost ovs-ctl[22135]: /etc/openvswitch/conf.db does not exist ... (warning). 
Oct 5 03:20:10 localhost ovs-ctl[22135]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Oct 5 03:20:10 localhost ovs-ctl[22135]: Starting ovsdb-server [ OK ]
Oct 5 03:20:10 localhost ovs-vsctl[22184]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 5 03:20:10 localhost ovs-vsctl[22204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-110.el9fdp "external-ids:system-id=\"c2abb7f3-ae8d-4817-a99b-01536f41e92b\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Oct 5 03:20:10 localhost ovs-ctl[22135]: Configuring Open vSwitch system IDs [ OK ]
Oct 5 03:20:10 localhost ovs-ctl[22135]: Enabling remote OVSDB managers [ OK ]
Oct 5 03:20:10 localhost systemd[1]: Started Open vSwitch Database Unit.
Oct 5 03:20:10 localhost ovs-vsctl[22210]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005471152.novalocal
Oct 5 03:20:10 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 5 03:20:10 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 5 03:20:10 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 5 03:20:10 localhost kernel: openvswitch: Open vSwitch switching datapath
Oct 5 03:20:10 localhost ovs-ctl[22254]: Inserting openvswitch module [ OK ]
Oct 5 03:20:10 localhost ovs-ctl[22223]: Starting ovs-vswitchd [ OK ]
Oct 5 03:20:10 localhost ovs-ctl[22223]: Enabling remote OVSDB managers [ OK ]
Oct 5 03:20:10 localhost ovs-vsctl[22273]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005471152.novalocal
Oct 5 03:20:10 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 5 03:20:10 localhost systemd[1]: Starting Open vSwitch...
Oct 5 03:20:10 localhost systemd[1]: Finished Open vSwitch.
Oct 5 03:20:41 localhost python3[22292]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:20:42 localhost NetworkManager[5970]: [1759648842.2600] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22451 uid=0 result="success"
Oct 5 03:20:42 localhost ifup[22452]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:42 localhost ifup[22453]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:42 localhost ifup[22454]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:42 localhost NetworkManager[5970]: [1759648842.2912] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22460 uid=0 result="success"
Oct 5 03:20:42 localhost ovs-vsctl[22462]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:31:b6:99 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Oct 5 03:20:42 localhost kernel: device ovs-system entered promiscuous mode
Oct 5 03:20:42 localhost NetworkManager[5970]: [1759648842.3182] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Oct 5 03:20:42 localhost kernel: Timeout policy base is empty
Oct 5 03:20:42 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Oct 5 03:20:42 localhost systemd-udevd[22463]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:42 localhost systemd-udevd[22478]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:42 localhost kernel: device br-ex entered promiscuous mode
Oct 5 03:20:42 localhost NetworkManager[5970]: [1759648842.3540] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Oct 5 03:20:42 localhost NetworkManager[5970]: [1759648842.3806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22488 uid=0 result="success"
Oct 5 03:20:42 localhost NetworkManager[5970]: [1759648842.4002] device (br-ex): carrier: link connected
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.4518] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22517 uid=0 result="success"
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.4990] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22532 uid=0 result="success"
Oct 5 03:20:45 localhost NET[22557]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.5865] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6009] dhcp4 (eth1): canceled DHCP transaction
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6009] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6010] dhcp4 (eth1): state changed no lease
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6052] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22566 uid=0 result="success"
Oct 5 03:20:45 localhost ifup[22567]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:45 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 5 03:20:45 localhost ifup[22568]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:45 localhost ifup[22570]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:45 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6427] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22584 uid=0 result="success"
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6905] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22594 uid=0 result="success"
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.6966] device (eth1): carrier: link connected
Oct 5 03:20:45 localhost NetworkManager[5970]: [1759648845.7143] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22603 uid=0 result="success"
Oct 5 03:20:45 localhost ipv6_wait_tentative[22615]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 5 03:20:46 localhost ipv6_wait_tentative[22620]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 5 03:20:47 localhost NetworkManager[5970]: [1759648847.7832] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22629 uid=0 result="success"
Oct 5 03:20:47 localhost ovs-vsctl[22644]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Oct 5 03:20:47 localhost kernel: device eth1 entered promiscuous mode
Oct 5 03:20:47 localhost NetworkManager[5970]: [1759648847.8559] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22652 uid=0 result="success"
Oct 5 03:20:47 localhost ifup[22653]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:47 localhost ifup[22654]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:47 localhost ifup[22655]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:47 localhost NetworkManager[5970]: [1759648847.8875] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22661 uid=0 result="success"
Oct 5 03:20:47 localhost NetworkManager[5970]: [1759648847.9281] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22671 uid=0 result="success"
Oct 5 03:20:47 localhost ifup[22672]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:47 localhost ifup[22673]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:47 localhost ifup[22674]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:47 localhost NetworkManager[5970]: [1759648847.9602] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22680 uid=0 result="success"
Oct 5 03:20:47 localhost ovs-vsctl[22683]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 5 03:20:48 localhost kernel: device vlan22 entered promiscuous mode
Oct 5 03:20:48 localhost NetworkManager[5970]: [1759648848.0020] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Oct 5 03:20:48 localhost systemd-udevd[22685]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:48 localhost NetworkManager[5970]: [1759648848.0275] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22694 uid=0 result="success"
Oct 5 03:20:48 localhost NetworkManager[5970]: [1759648848.0468] device (vlan22): carrier: link connected
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.0963] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22723 uid=0 result="success"
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.1449] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22738 uid=0 result="success"
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.2034] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22759 uid=0 result="success"
Oct 5 03:20:51 localhost ifup[22760]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:51 localhost ifup[22761]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:51 localhost ifup[22762]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.2361] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22768 uid=0 result="success"
Oct 5 03:20:51 localhost ovs-vsctl[22771]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 5 03:20:51 localhost kernel: device vlan20 entered promiscuous mode
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.2799] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Oct 5 03:20:51 localhost systemd-udevd[22773]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.3061] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22783 uid=0 result="success"
Oct 5 03:20:51 localhost NetworkManager[5970]: [1759648851.3255] device (vlan20): carrier: link connected
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.3752] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22813 uid=0 result="success"
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.4209] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22828 uid=0 result="success"
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.4785] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22849 uid=0 result="success"
Oct 5 03:20:54 localhost ifup[22850]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:54 localhost ifup[22851]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:54 localhost ifup[22852]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.5093] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22858 uid=0 result="success"
Oct 5 03:20:54 localhost ovs-vsctl[22861]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 5 03:20:54 localhost systemd-udevd[22863]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:54 localhost kernel: device vlan21 entered promiscuous mode
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.5757] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.6017] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22873 uid=0 result="success"
Oct 5 03:20:54 localhost NetworkManager[5970]: [1759648854.6206] device (vlan21): carrier: link connected
Oct 5 03:20:55 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.6810] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22903 uid=0 result="success"
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.7181] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22918 uid=0 result="success"
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.7740] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22939 uid=0 result="success"
Oct 5 03:20:57 localhost ifup[22940]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:20:57 localhost ifup[22941]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:20:57 localhost ifup[22942]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.8053] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22948 uid=0 result="success"
Oct 5 03:20:57 localhost ovs-vsctl[22951]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Oct 5 03:20:57 localhost kernel: device vlan44 entered promiscuous mode
Oct 5 03:20:57 localhost systemd-udevd[22953]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.8823] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.9069] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22963 uid=0 result="success"
Oct 5 03:20:57 localhost NetworkManager[5970]: [1759648857.9274] device (vlan44): carrier: link connected
Oct 5 03:21:00 localhost NetworkManager[5970]: [1759648860.9802] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22993 uid=0 result="success"
Oct 5 03:21:01 localhost NetworkManager[5970]: [1759648861.0275] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23008 uid=0 result="success"
Oct 5 03:21:01 localhost NetworkManager[5970]: [1759648861.0878] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23029 uid=0 result="success"
Oct 5 03:21:01 localhost ifup[23030]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:21:01 localhost ifup[23031]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:21:01 localhost ifup[23032]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:21:01 localhost NetworkManager[5970]: [1759648861.1200] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23038 uid=0 result="success"
Oct 5 03:21:01 localhost ovs-vsctl[23041]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 5 03:21:01 localhost systemd-udevd[23043]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 03:21:01 localhost kernel: device vlan23 entered promiscuous mode
Oct 5 03:21:01 localhost NetworkManager[5970]: [1759648861.1597] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Oct 5 03:21:01 localhost NetworkManager[5970]: [1759648861.1854] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23053 uid=0 result="success"
Oct 5 03:21:01 localhost NetworkManager[5970]: [1759648861.2057] device (vlan23): carrier: link connected
Oct 5 03:21:04 localhost NetworkManager[5970]: [1759648864.2549] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23083 uid=0 result="success"
Oct 5 03:21:04 localhost NetworkManager[5970]: [1759648864.3020] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23098 uid=0 result="success"
Oct 5 03:21:04 localhost NetworkManager[5970]: [1759648864.3574] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23119 uid=0 result="success"
Oct 5 03:21:04 localhost ifup[23120]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:21:04 localhost ifup[23121]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:21:04 localhost ifup[23122]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:21:04 localhost NetworkManager[5970]: [1759648864.3867] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23128 uid=0 result="success"
Oct 5 03:21:04 localhost ovs-vsctl[23131]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Oct 5 03:21:04 localhost NetworkManager[5970]: [1759648864.4437] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23138 uid=0 result="success"
Oct 5 03:21:05 localhost NetworkManager[5970]: [1759648865.5004] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23165 uid=0 result="success"
Oct 5 03:21:05 localhost NetworkManager[5970]: [1759648865.5465] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23180 uid=0 result="success"
Oct 5 03:21:05 localhost NetworkManager[5970]: [1759648865.6023] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23201 uid=0 result="success"
Oct 5 03:21:05 localhost ifup[23202]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:21:05 localhost ifup[23203]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:21:05 localhost ifup[23204]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:21:05 localhost NetworkManager[5970]: [1759648865.6323] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23210 uid=0 result="success"
Oct 5 03:21:05 localhost ovs-vsctl[23213]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 5 03:21:05 localhost NetworkManager[5970]: [1759648865.6851] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23220 uid=0 result="success"
Oct 5 03:21:06 localhost NetworkManager[5970]: [1759648866.7394] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23248 uid=0 result="success"
Oct 5 03:21:06 localhost NetworkManager[5970]: [1759648866.7856] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23263 uid=0 result="success"
Oct 5 03:21:06 localhost NetworkManager[5970]: [1759648866.8428] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23284 uid=0 result="success"
Oct 5 03:21:06 localhost ifup[23285]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:21:06 localhost ifup[23286]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:21:06 localhost ifup[23287]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:21:06 localhost NetworkManager[5970]: [1759648866.8731] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23293 uid=0 result="success"
Oct 5 03:21:06 localhost ovs-vsctl[23296]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 5 03:21:06 localhost NetworkManager[5970]: [1759648866.9294] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23303 uid=0 result="success"
Oct 5 03:21:07 localhost NetworkManager[5970]: [1759648867.9903] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23331 uid=0 result="success"
Oct 5 03:21:08 localhost NetworkManager[5970]: [1759648868.0352] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23346 uid=0 result="success"
Oct 5 03:21:08 localhost NetworkManager[5970]: [1759648868.0942] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23367 uid=0 result="success"
Oct 5 03:21:08 localhost ifup[23368]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:21:08 localhost ifup[23369]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:21:08 localhost ifup[23370]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:21:08 localhost NetworkManager[5970]: [1759648868.1251] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23376 uid=0 result="success"
Oct 5 03:21:08 localhost ovs-vsctl[23379]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 5 03:21:08 localhost NetworkManager[5970]: [1759648868.1800] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23386 uid=0 result="success"
Oct 5 03:21:09 localhost NetworkManager[5970]: [1759648869.2372] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23414 uid=0 result="success"
Oct 5 03:21:09 localhost NetworkManager[5970]: [1759648869.2825] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23429 uid=0 result="success"
Oct 5 03:21:09 localhost NetworkManager[5970]: [1759648869.3404] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23450 uid=0 result="success"
Oct 5 03:21:09 localhost ifup[23451]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 5 03:21:09 localhost ifup[23452]: 'network-scripts' will be removed from distribution in near future.
Oct 5 03:21:09 localhost ifup[23453]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 5 03:21:09 localhost NetworkManager[5970]: [1759648869.3721] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23459 uid=0 result="success"
Oct 5 03:21:09 localhost ovs-vsctl[23462]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 5 03:21:09 localhost NetworkManager[5970]: [1759648869.4284] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23469 uid=0 result="success"
Oct 5 03:21:10 localhost NetworkManager[5970]: [1759648870.4837] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23497 uid=0 result="success"
Oct 5 03:21:10 localhost NetworkManager[5970]: [1759648870.5286] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23512 uid=0 result="success"
Oct 5 03:21:16 localhost sshd[23530]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:21:36 localhost python3[23546]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:21:41 localhost python3[23565]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:21:41 localhost python3[23581]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:21:43 localhost python3[23595]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:21:43 localhost python3[23611]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 5 03:21:44 localhost python3[23625]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Oct 5 03:21:45 localhost python3[23640]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005471152.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:21:46 localhost python3[23660]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4d81-b09d-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:21:46 localhost systemd[1]: Starting Hostname Service...
Oct 5 03:21:46 localhost systemd[1]: Started Hostname Service.
Oct 5 03:21:46 localhost systemd-hostnamed[23664]: Hostname set to (static)
Oct 5 03:21:46 localhost NetworkManager[5970]: [1759648906.2751] hostname: static hostname changed from "np0005471152.novalocal" to "np0005471152.localdomain"
Oct 5 03:21:46 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 5 03:21:46 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 5 03:21:47 localhost systemd[1]: session-10.scope: Deactivated successfully.
Oct 5 03:21:47 localhost systemd[1]: session-10.scope: Consumed 1min 40.402s CPU time.
Oct 5 03:21:47 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Oct 5 03:21:47 localhost systemd-logind[760]: Removed session 10.
Oct 5 03:21:50 localhost sshd[23675]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:21:51 localhost systemd-logind[760]: New session 11 of user zuul.
Oct 5 03:21:51 localhost systemd[1]: Started Session 11 of User zuul.
Oct 5 03:21:51 localhost python3[23692]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Oct 5 03:21:52 localhost systemd[1]: session-11.scope: Deactivated successfully.
Oct 5 03:21:52 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Oct 5 03:21:52 localhost systemd-logind[760]: Removed session 11.
Oct 5 03:21:56 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 5 03:22:16 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 5 03:22:44 localhost sshd[23696]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:22:44 localhost systemd-logind[760]: New session 12 of user zuul.
Oct 5 03:22:44 localhost systemd[1]: Started Session 12 of User zuul.
Oct 5 03:22:45 localhost python3[23715]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:22:48 localhost systemd[1]: Reloading.
Oct 5 03:22:48 localhost systemd-rc-local-generator[23760]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:22:48 localhost systemd-sysv-generator[23763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:22:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:22:48 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 5 03:22:49 localhost systemd[1]: Reloading.
Oct 5 03:22:49 localhost systemd-rc-local-generator[23797]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:22:49 localhost systemd-sysv-generator[23803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:22:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:22:49 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 5 03:22:49 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 5 03:22:49 localhost systemd[1]: Reloading.
Oct 5 03:22:49 localhost systemd-sysv-generator[23839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:22:49 localhost systemd-rc-local-generator[23836]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:22:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:22:49 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Oct 5 03:22:49 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:22:49 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:22:49 localhost systemd[1]: Reloading.
Oct 5 03:22:50 localhost systemd-sysv-generator[23901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:22:50 localhost systemd-rc-local-generator[23896]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:22:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:22:50 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:22:50 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:22:50 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:22:50 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:22:50 localhost systemd[1]: run-r775f791931fb42768834d11541a2f73e.service: Deactivated successfully.
Oct 5 03:22:50 localhost systemd[1]: run-rd6c4282cbd8d43e6a323eee423de3509.service: Deactivated successfully.
Oct 5 03:23:51 localhost systemd[1]: session-12.scope: Deactivated successfully.
Oct 5 03:23:51 localhost systemd[1]: session-12.scope: Consumed 4.454s CPU time.
Oct 5 03:23:51 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Oct 5 03:23:51 localhost systemd-logind[760]: Removed session 12.
Oct 5 03:33:11 localhost sshd[24488]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:33:12 localhost sshd[24489]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:34:44 localhost sshd[24491]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:39:32 localhost sshd[24493]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:39:32 localhost sshd[24494]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:40:05 localhost sshd[24498]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:40:05 localhost systemd-logind[760]: New session 13 of user zuul.
Oct 5 03:40:05 localhost systemd[1]: Started Session 13 of User zuul.
Oct 5 03:40:06 localhost python3[24546]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 03:40:08 localhost python3[24633]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:40:11 localhost python3[24650]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:40:11 localhost python3[24666]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:11 localhost kernel: loop: module loaded
Oct 5 03:40:11 localhost kernel: loop3: detected capacity change from 0 to 14680064
Oct 5 03:40:12 localhost python3[24691]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:12 localhost lvm[24694]: PV /dev/loop3 not used.
Oct 5 03:40:12 localhost lvm[24696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 5 03:40:12 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 5 03:40:12 localhost lvm[24703]: 1 logical volume(s) in volume group "ceph_vg0" now active
Oct 5 03:40:12 localhost lvm[24706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 5 03:40:12 localhost lvm[24706]: VG ceph_vg0 finished
Oct 5 03:40:12 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 5 03:40:13 localhost python3[24755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:40:13 localhost python3[24798]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650013.0418186-55307-55728127481515/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:14 localhost python3[24828]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:40:14 localhost systemd[1]: Reloading.
Oct 5 03:40:14 localhost systemd-sysv-generator[24855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:40:14 localhost systemd-rc-local-generator[24851]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:40:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:40:15 localhost systemd[1]: Starting Ceph OSD losetup...
Oct 5 03:40:15 localhost bash[24868]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img)
Oct 5 03:40:15 localhost systemd[1]: Finished Ceph OSD losetup.
Oct 5 03:40:15 localhost lvm[24869]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 5 03:40:15 localhost lvm[24869]: VG ceph_vg0 finished
Oct 5 03:40:15 localhost python3[24885]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:40:18 localhost python3[24902]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:40:19 localhost python3[24918]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:19 localhost kernel: loop4: detected capacity change from 0 to 14680064
Oct 5 03:40:19 localhost python3[24940]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:19 localhost lvm[24943]: PV /dev/loop4 not used.
Oct 5 03:40:19 localhost lvm[24945]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 5 03:40:19 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 5 03:40:19 localhost lvm[24953]: 1 logical volume(s) in volume group "ceph_vg1" now active
Oct 5 03:40:19 localhost lvm[24956]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 5 03:40:19 localhost lvm[24956]: VG ceph_vg1 finished
Oct 5 03:40:19 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 5 03:40:20 localhost python3[25005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:40:20 localhost python3[25048]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650020.2327604-55527-236310031317900/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:21 localhost python3[25078]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:40:21 localhost systemd[1]: Reloading.
Oct 5 03:40:21 localhost systemd-sysv-generator[25110]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:40:21 localhost systemd-rc-local-generator[25107]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:40:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:40:21 localhost systemd[1]: Starting Ceph OSD losetup...
Oct 5 03:40:21 localhost bash[25119]: /dev/loop4: [64516]:8606979 (/var/lib/ceph-osd-1.img)
Oct 5 03:40:21 localhost systemd[1]: Finished Ceph OSD losetup.
Oct 5 03:40:21 localhost lvm[25120]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 5 03:40:21 localhost lvm[25120]: VG ceph_vg1 finished
Oct 5 03:40:31 localhost python3[25165]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 03:40:32 localhost python3[25185]: ansible-hostname Invoked with name=np0005471152.localdomain use=None
Oct 5 03:40:32 localhost systemd[1]: Starting Hostname Service...
Oct 5 03:40:32 localhost systemd[1]: Started Hostname Service.
Oct 5 03:40:36 localhost python3[25208]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Oct 5 03:40:36 localhost python3[25256]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.01y9882ftmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:37 localhost python3[25286]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.01y9882ftmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:37 localhost python3[25302]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.01y9882ftmphosts insertbefore=BOF block=192.168.122.106 np0005471150.localdomain np0005471150#012192.168.122.106 np0005471150.ctlplane.localdomain np0005471150.ctlplane#012192.168.122.107 np0005471151.localdomain np0005471151#012192.168.122.107 np0005471151.ctlplane.localdomain np0005471151.ctlplane#012192.168.122.108 np0005471152.localdomain np0005471152#012192.168.122.108 np0005471152.ctlplane.localdomain np0005471152.ctlplane#012192.168.122.103 np0005471146.localdomain np0005471146#012192.168.122.103 np0005471146.ctlplane.localdomain np0005471146.ctlplane#012192.168.122.104 np0005471147.localdomain np0005471147#012192.168.122.104 np0005471147.ctlplane.localdomain np0005471147.ctlplane#012192.168.122.105 np0005471148.localdomain np0005471148#012192.168.122.105 np0005471148.ctlplane.localdomain np0005471148.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:38 localhost python3[25318]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.01y9882ftmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:38 localhost python3[25335]: ansible-file Invoked with path=/tmp/ansible.01y9882ftmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:40 localhost python3[25351]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:42 localhost python3[25369]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:40:47 localhost python3[25418]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:40:47 localhost python3[25463]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650046.6428776-56333-267166235296535/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:48 localhost python3[25493]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:40:50 localhost python3[25511]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:40:50 localhost chronyd[766]: chronyd exiting
Oct 5 03:40:50 localhost systemd[1]: Stopping NTP client/server...
Oct 5 03:40:50 localhost systemd[1]: chronyd.service: Deactivated successfully.
Oct 5 03:40:50 localhost systemd[1]: Stopped NTP client/server.
Oct 5 03:40:50 localhost systemd[1]: chronyd.service: Consumed 108ms CPU time, read 1.9M from disk, written 0B to disk.
Oct 5 03:40:50 localhost systemd[1]: Starting NTP client/server...
Oct 5 03:40:50 localhost chronyd[25518]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 5 03:40:50 localhost chronyd[25518]: Frequency -26.390 +/- 0.182 ppm read from /var/lib/chrony/drift
Oct 5 03:40:50 localhost chronyd[25518]: Loaded seccomp filter (level 2)
Oct 5 03:40:50 localhost systemd[1]: Started NTP client/server.
Oct 5 03:40:52 localhost python3[25567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:40:52 localhost python3[25610]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759650052.0543213-56739-196907869262491/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:40:53 localhost python3[25640]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:40:53 localhost systemd[1]: Reloading.
Oct 5 03:40:53 localhost systemd-rc-local-generator[25662]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:40:53 localhost systemd-sysv-generator[25670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:40:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:40:53 localhost systemd[1]: Reloading.
Oct 5 03:40:53 localhost systemd-rc-local-generator[25701]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:40:53 localhost systemd-sysv-generator[25704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:40:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:40:53 localhost systemd[1]: Starting chronyd online sources service...
Oct 5 03:40:53 localhost chronyc[25716]: 200 OK
Oct 5 03:40:53 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Oct 5 03:40:53 localhost systemd[1]: Finished chronyd online sources service.
Oct 5 03:40:54 localhost python3[25732]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:54 localhost chronyd[25518]: System clock was stepped by 0.000000 seconds
Oct 5 03:40:54 localhost python3[25749]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:40:56 localhost chronyd[25518]: Selected source 138.197.135.239 (pool.ntp.org)
Oct 5 03:41:02 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 5 03:41:05 localhost python3[25769]: ansible-timezone Invoked with name=UTC hwclock=None
Oct 5 03:41:05 localhost systemd[1]: Starting Time & Date Service...
Oct 5 03:41:05 localhost systemd[1]: Started Time & Date Service.
Oct 5 03:41:06 localhost python3[25789]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 03:41:06 localhost chronyd[25518]: chronyd exiting
Oct 5 03:41:06 localhost systemd[1]: Stopping NTP client/server...
Oct 5 03:41:06 localhost systemd[1]: chronyd.service: Deactivated successfully.
Oct 5 03:41:06 localhost systemd[1]: Stopped NTP client/server.
Oct 5 03:41:06 localhost systemd[1]: Starting NTP client/server...
Oct 5 03:41:06 localhost chronyd[25796]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 5 03:41:06 localhost chronyd[25796]: Frequency -26.390 +/- 0.182 ppm read from /var/lib/chrony/drift
Oct 5 03:41:06 localhost chronyd[25796]: Loaded seccomp filter (level 2)
Oct 5 03:41:06 localhost systemd[1]: Started NTP client/server.
Oct 5 03:41:10 localhost chronyd[25796]: Selected source 208.81.1.244 (pool.ntp.org)
Oct 5 03:41:35 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 5 03:43:16 localhost sshd[25993]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:16 localhost systemd-logind[760]: New session 14 of user ceph-admin.
Oct 5 03:43:16 localhost systemd[1]: Created slice User Slice of UID 1002.
Oct 5 03:43:16 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Oct 5 03:43:16 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Oct 5 03:43:16 localhost systemd[1]: Starting User Manager for UID 1002...
Oct 5 03:43:16 localhost sshd[26010]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:16 localhost systemd[25997]: Queued start job for default target Main User Target.
Oct 5 03:43:16 localhost systemd[25997]: Created slice User Application Slice.
Oct 5 03:43:16 localhost systemd[25997]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 5 03:43:16 localhost systemd[25997]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 03:43:16 localhost systemd[25997]: Reached target Paths.
Oct 5 03:43:16 localhost systemd[25997]: Reached target Timers.
Oct 5 03:43:16 localhost systemd[25997]: Starting D-Bus User Message Bus Socket...
Oct 5 03:43:16 localhost systemd[25997]: Starting Create User's Volatile Files and Directories...
Oct 5 03:43:16 localhost systemd[25997]: Listening on D-Bus User Message Bus Socket.
Oct 5 03:43:16 localhost systemd[25997]: Reached target Sockets.
Oct 5 03:43:16 localhost systemd[25997]: Finished Create User's Volatile Files and Directories.
Oct 5 03:43:16 localhost systemd[25997]: Reached target Basic System.
Oct 5 03:43:16 localhost systemd[1]: Started User Manager for UID 1002.
Oct 5 03:43:16 localhost systemd[25997]: Reached target Main User Target.
Oct 5 03:43:16 localhost systemd[25997]: Startup finished in 120ms.
Oct 5 03:43:16 localhost systemd[1]: Started Session 14 of User ceph-admin.
Oct 5 03:43:16 localhost systemd-logind[760]: New session 16 of user ceph-admin.
Oct 5 03:43:16 localhost systemd[1]: Started Session 16 of User ceph-admin.
Oct 5 03:43:16 localhost sshd[26032]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:16 localhost systemd-logind[760]: New session 17 of user ceph-admin.
Oct 5 03:43:16 localhost systemd[1]: Started Session 17 of User ceph-admin.
Oct 5 03:43:17 localhost sshd[26051]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:17 localhost systemd-logind[760]: New session 18 of user ceph-admin.
Oct 5 03:43:17 localhost systemd[1]: Started Session 18 of User ceph-admin.
Oct 5 03:43:17 localhost sshd[26070]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:17 localhost systemd-logind[760]: New session 19 of user ceph-admin.
Oct 5 03:43:17 localhost systemd[1]: Started Session 19 of User ceph-admin.
Oct 5 03:43:17 localhost sshd[26089]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:17 localhost systemd-logind[760]: New session 20 of user ceph-admin.
Oct 5 03:43:17 localhost systemd[1]: Started Session 20 of User ceph-admin.
Oct 5 03:43:18 localhost sshd[26108]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:18 localhost systemd-logind[760]: New session 21 of user ceph-admin.
Oct 5 03:43:18 localhost systemd[1]: Started Session 21 of User ceph-admin.
Oct 5 03:43:18 localhost sshd[26127]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:18 localhost systemd-logind[760]: New session 22 of user ceph-admin.
Oct 5 03:43:18 localhost systemd[1]: Started Session 22 of User ceph-admin.
Oct 5 03:43:19 localhost sshd[26146]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:19 localhost systemd-logind[760]: New session 23 of user ceph-admin.
Oct 5 03:43:19 localhost systemd[1]: Started Session 23 of User ceph-admin.
Oct 5 03:43:19 localhost sshd[26165]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:19 localhost systemd-logind[760]: New session 24 of user ceph-admin.
Oct 5 03:43:19 localhost systemd[1]: Started Session 24 of User ceph-admin.
Oct 5 03:43:20 localhost sshd[26182]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:20 localhost systemd-logind[760]: New session 25 of user ceph-admin.
Oct 5 03:43:20 localhost systemd[1]: Started Session 25 of User ceph-admin.
Oct 5 03:43:20 localhost sshd[26201]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 03:43:20 localhost systemd-logind[760]: New session 26 of user ceph-admin.
Oct 5 03:43:20 localhost systemd[1]: Started Session 26 of User ceph-admin.
Oct 5 03:43:20 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:41 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26417 (sysctl)
Oct 5 03:43:41 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 5 03:43:41 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 5 03:43:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:42 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:43:45 localhost kernel: VFS: idmapped mount is not enabled.
Oct 5 03:44:06 localhost podman[26556]:
Oct 5 03:44:06 localhost podman[26556]: 2025-10-05 07:44:06.869938445 +0000 UTC m=+24.244906675 container create 9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_turing, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main)
Oct 5 03:44:06 localhost systemd[1]: Created slice Slice /machine.
Oct 5 03:44:06 localhost systemd[1]: Started libpod-conmon-9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4.scope.
Oct 5 03:44:06 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:06 localhost podman[26556]: 2025-10-05 07:43:42.671512229 +0000 UTC m=+0.046480479 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:06 localhost podman[26556]: 2025-10-05 07:44:06.975818892 +0000 UTC m=+24.350787152 container init 9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_turing, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:06 localhost podman[26556]: 2025-10-05 07:44:06.986270636 +0000 UTC m=+24.361238886 container start 9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_turing, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 03:44:06 localhost podman[26556]: 2025-10-05 07:44:06.986468732 +0000 UTC m=+24.361436982 container attach 9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_turing, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:06 localhost gracious_turing[26752]: 167 167
Oct 5 03:44:06 localhost systemd[1]: libpod-9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4.scope: Deactivated successfully.
Oct 5 03:44:06 localhost podman[26556]: 2025-10-05 07:44:06.990016499 +0000 UTC m=+24.364984749 container died 9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_turing, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:44:07 localhost podman[26757]: 2025-10-05 07:44:07.081231176 +0000 UTC m=+0.081513274 container remove 9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_turing, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64)
Oct 5 03:44:07 localhost systemd[1]: libpod-conmon-9f8363aa637d54f8ae5e8ce57c1c1358116477f9b9d18bf569e5241d7c45c2b4.scope: Deactivated successfully.
Oct 5 03:44:07 localhost podman[26954]:
Oct 5 03:44:07 localhost podman[26954]: 2025-10-05 07:44:07.316438089 +0000 UTC m=+0.046234611 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:07 localhost systemd[1]: var-lib-containers-storage-overlay-1644209256e67666df4964b489b1f32a74955dd9cfd8ec32da09e5dda88339f4-merged.mount: Deactivated successfully.
Oct 5 03:44:11 localhost podman[26954]: 2025-10-05 07:44:11.279906353 +0000 UTC m=+4.009702855 container create 38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_ellis, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, release=553, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux )
Oct 5 03:44:11 localhost systemd[1]: Started libpod-conmon-38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e.scope.
Oct 5 03:44:11 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38758453859c2c83b89764c59e94aa15c95f0252ab15d12037b795bc1cf3aeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38758453859c2c83b89764c59e94aa15c95f0252ab15d12037b795bc1cf3aeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:11 localhost podman[26954]: 2025-10-05 07:44:11.406972298 +0000 UTC m=+4.136768800 container init 38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_ellis, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public, release=553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, RELEASE=main, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:11 localhost podman[26954]: 2025-10-05 07:44:11.42537178 +0000 UTC m=+4.155168292 container start 38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_ellis, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, RELEASE=main, release=553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public)
Oct 5 03:44:11 localhost podman[26954]: 2025-10-05 07:44:11.42573082 +0000 UTC m=+4.155527322 container attach 38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_ellis, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64)
Oct 5 03:44:12 localhost romantic_ellis[27026]: [
Oct 5 03:44:12 localhost romantic_ellis[27026]: {
Oct 5 03:44:12 localhost romantic_ellis[27026]: "available": false,
Oct 5 03:44:12 localhost romantic_ellis[27026]: "ceph_device": false,
Oct 5 03:44:12 localhost romantic_ellis[27026]: "device_id": "QEMU_DVD-ROM_QM00001",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "lsm_data": {},
Oct 5 03:44:12 localhost romantic_ellis[27026]: "lvs": [],
Oct 5 03:44:12 localhost romantic_ellis[27026]: "path": "/dev/sr0",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "rejected_reasons": [
Oct 5 03:44:12 localhost romantic_ellis[27026]: "Insufficient space (<5GB)",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "Has a FileSystem"
Oct 5 03:44:12 localhost romantic_ellis[27026]: ],
Oct 5 03:44:12 localhost romantic_ellis[27026]: "sys_api": {
Oct 5 03:44:12 localhost romantic_ellis[27026]: "actuators": null,
Oct 5 03:44:12 localhost romantic_ellis[27026]: "device_nodes": "sr0",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "human_readable_size": "482.00 KB",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "id_bus": "ata",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "model": "QEMU DVD-ROM",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "nr_requests": "2",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "partitions": {},
Oct 5 03:44:12 localhost romantic_ellis[27026]: "path": "/dev/sr0",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "removable": "1",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "rev": "2.5+",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "ro": "0",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "rotational": "1",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "sas_address": "",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "sas_device_handle": "",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "scheduler_mode": "mq-deadline",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "sectors": 0,
Oct 5 03:44:12 localhost romantic_ellis[27026]: "sectorsize": "2048",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "size": 493568.0,
Oct 5 03:44:12 localhost romantic_ellis[27026]: "support_discard": "0",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "type": "disk",
Oct 5 03:44:12 localhost romantic_ellis[27026]: "vendor": "QEMU"
Oct 5 03:44:12 localhost romantic_ellis[27026]: }
Oct 5 03:44:12 localhost romantic_ellis[27026]: }
Oct 5 03:44:12 localhost romantic_ellis[27026]: ]
Oct 5 03:44:12 localhost systemd[1]: libpod-38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e.scope: Deactivated successfully.
Oct 5 03:44:12 localhost podman[26954]: 2025-10-05 07:44:12.240478176 +0000 UTC m=+4.970274708 container died 38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_ellis, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, version=7, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 03:44:12 localhost systemd[1]: var-lib-containers-storage-overlay-f38758453859c2c83b89764c59e94aa15c95f0252ab15d12037b795bc1cf3aeb-merged.mount: Deactivated successfully.
Oct 5 03:44:12 localhost podman[28315]: 2025-10-05 07:44:12.323536461 +0000 UTC m=+0.073313771 container remove 38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_ellis, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main)
Oct 5 03:44:12 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:12 localhost systemd[1]: libpod-conmon-38805673477d850a05350caeb6b12d207e6d8eda4be8b7a15575b69261d9494e.scope: Deactivated successfully.
Oct 5 03:44:12 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Oct 5 03:44:12 localhost systemd[1]: Closed Process Core Dump Socket.
Oct 5 03:44:12 localhost systemd[1]: Stopping Process Core Dump Socket...
Oct 5 03:44:12 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 5 03:44:12 localhost systemd[1]: Reloading.
Oct 5 03:44:12 localhost systemd-rc-local-generator[28395]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:13 localhost systemd-sysv-generator[28402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:13 localhost systemd[1]: Reloading.
Oct 5 03:44:13 localhost systemd-rc-local-generator[28434]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:13 localhost systemd-sysv-generator[28440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:39 localhost podman[28519]:
Oct 5 03:44:39 localhost podman[28519]: 2025-10-05 07:44:39.289182812 +0000 UTC m=+0.070417188 container create b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_meninsky, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, RELEASE=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Oct 5 03:44:39 localhost systemd[1]: Started libpod-conmon-b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9.scope.
Oct 5 03:44:39 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:39 localhost podman[28519]: 2025-10-05 07:44:39.359402284 +0000 UTC m=+0.140636670 container init b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_meninsky, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:39 localhost podman[28519]: 2025-10-05 07:44:39.262258201 +0000 UTC m=+0.043492597 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:39 localhost podman[28519]: 2025-10-05 07:44:39.367789149 +0000 UTC m=+0.149023525 container start b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_meninsky, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, RELEASE=main, ceph=True, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 5 03:44:39 localhost podman[28519]: 2025-10-05 07:44:39.368084927 +0000 UTC m=+0.149319343 container attach b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_meninsky, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, name=rhceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, release=553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 03:44:39 localhost relaxed_meninsky[28535]: 167 167
Oct 5 03:44:39 localhost systemd[1]: libpod-b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9.scope: Deactivated successfully.
Oct 5 03:44:39 localhost podman[28519]: 2025-10-05 07:44:39.372054182 +0000 UTC m=+0.153288568 container died b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_meninsky, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, version=7, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=553)
Oct 5 03:44:39 localhost podman[28540]: 2025-10-05 07:44:39.459430143 +0000 UTC m=+0.072224726 container remove b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_meninsky, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git)
Oct 5 03:44:39 localhost systemd[1]: libpod-conmon-b407306954ebd53b3ab36b748d8e9265c674654f7f3f9f097f143def93d438f9.scope: Deactivated successfully.
Oct 5 03:44:39 localhost systemd[1]: Reloading.
Oct 5 03:44:39 localhost systemd-rc-local-generator[28579]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:39 localhost systemd-sysv-generator[28584]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 5 03:44:39 localhost systemd[1]: Reloading.
Oct 5 03:44:39 localhost systemd-rc-local-generator[28616]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:39 localhost systemd-sysv-generator[28619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:39 localhost systemd[1]: Reached target All Ceph clusters and services.
Oct 5 03:44:40 localhost systemd[1]: Reloading.
Oct 5 03:44:40 localhost systemd-rc-local-generator[28655]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:40 localhost systemd-sysv-generator[28659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:40 localhost systemd[1]: Reached target Ceph cluster 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:44:40 localhost systemd[1]: Reloading.
Oct 5 03:44:40 localhost systemd-rc-local-generator[28696]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:40 localhost systemd-sysv-generator[28701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:40 localhost systemd[1]: Reloading.
Oct 5 03:44:40 localhost systemd-rc-local-generator[28738]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:40 localhost systemd-sysv-generator[28741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:44:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:44:40 localhost systemd[1]: Created slice Slice /system/ceph-659062ac-50b4-5607-b699-3105da7f55ee. Oct 5 03:44:40 localhost systemd[1]: Reached target System Time Set. Oct 5 03:44:40 localhost systemd[1]: Reached target System Time Synchronized. Oct 5 03:44:40 localhost systemd[1]: Starting Ceph crash.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee... Oct 5 03:44:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Oct 5 03:44:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Oct 5 03:44:40 localhost podman[28798]: Oct 5 03:44:41 localhost podman[28798]: 2025-10-05 07:44:41.000970367 +0000 UTC m=+0.069386350 container create 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vendor=Red Hat, Inc., RELEASE=main, version=7, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_CLEAN=True) Oct 5 03:44:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3710fff904ff5a4b030ac027c7d8166b429a75bee1e7c103cdc6efd93943bc46/merged/etc/ceph/ceph.client.crash.np0005471152.keyring supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3710fff904ff5a4b030ac027c7d8166b429a75bee1e7c103cdc6efd93943bc46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:41 localhost podman[28798]: 2025-10-05 07:44:40.973983414 +0000 UTC m=+0.042399427 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3710fff904ff5a4b030ac027c7d8166b429a75bee1e7c103cdc6efd93943bc46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:41 localhost podman[28798]: 2025-10-05 07:44:41.097528394 +0000 UTC m=+0.165944377 container init 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=) Oct 5 03:44:41 localhost podman[28798]: 2025-10-05 07:44:41.106941056 +0000 UTC m=+0.175357039 container start 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 03:44:41 localhost bash[28798]: 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b Oct 5 03:44:41 localhost systemd[1]: Started Ceph crash.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee. 
Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: INFO:ceph-crash:pinging cluster to exercise our key Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.281+0000 7f1aa8016640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.281+0000 7f1aa8016640 -1 AuthRegistry(0x7f1aa0068980) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.282+0000 7f1aa8016640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.282+0000 7f1aa8016640 -1 AuthRegistry(0x7f1aa8015000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.289+0000 7f1aa658c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.289+0000 7f1aa558a640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.291+0000 7f1aa5d8b640 -1 monclient(hunting): 
handle_auth_bad_method server allowed_methods [2] but i only support [1] Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: 2025-10-05T07:44:41.291+0000 7f1aa8016640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: [errno 13] RADOS permission denied (error connecting to the cluster) Oct 5 03:44:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152[28812]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Oct 5 03:44:44 localhost podman[28897]: Oct 5 03:44:44 localhost podman[28897]: 2025-10-05 07:44:44.650145221 +0000 UTC m=+0.069850633 container create 10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_nobel, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12) Oct 5 03:44:44 localhost systemd[1]: Started 
libpod-conmon-10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b.scope. Oct 5 03:44:44 localhost systemd[1]: Started libcrun container. Oct 5 03:44:44 localhost podman[28897]: 2025-10-05 07:44:44.715276135 +0000 UTC m=+0.134981547 container init 10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_nobel, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux ) Oct 5 03:44:44 localhost podman[28897]: 2025-10-05 07:44:44.622816238 +0000 UTC m=+0.042521710 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:44 localhost systemd[1]: tmp-crun.ujV0wY.mount: Deactivated successfully. 
Oct 5 03:44:44 localhost podman[28897]: 2025-10-05 07:44:44.728308955 +0000 UTC m=+0.148014367 container start 10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_nobel, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=553, version=7, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc.) 
Oct 5 03:44:44 localhost podman[28897]: 2025-10-05 07:44:44.728591962 +0000 UTC m=+0.148297375 container attach 10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_nobel, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7) Oct 5 03:44:44 localhost systemd[1]: libpod-10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b.scope: Deactivated successfully. 
Oct 5 03:44:44 localhost elated_nobel[28913]: 167 167 Oct 5 03:44:44 localhost podman[28897]: 2025-10-05 07:44:44.731216813 +0000 UTC m=+0.150922305 container died 10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_nobel, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:44:44 localhost podman[28918]: 2025-10-05 07:44:44.817974087 +0000 UTC m=+0.077366514 container remove 10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_nobel, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 03:44:44 localhost systemd[1]: libpod-conmon-10b2095f2e14dacac912d170c915c8a6b8b66a4b26c789de5bc27f3f0b4a773b.scope: Deactivated successfully. Oct 5 03:44:45 localhost podman[28940]: Oct 5 03:44:45 localhost podman[28940]: 2025-10-05 07:44:45.02483725 +0000 UTC m=+0.070918561 container create 5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hoover, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, release=553, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 03:44:45 localhost systemd[1]: Started libpod-conmon-5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c.scope. Oct 5 03:44:45 localhost systemd[1]: Started libcrun container. Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c299621da59d95fab454d0113411b2c7d6d8adb19b28768af697f031fb8d040/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:45 localhost podman[28940]: 2025-10-05 07:44:44.996967013 +0000 UTC m=+0.043048314 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c299621da59d95fab454d0113411b2c7d6d8adb19b28768af697f031fb8d040/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c299621da59d95fab454d0113411b2c7d6d8adb19b28768af697f031fb8d040/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c299621da59d95fab454d0113411b2c7d6d8adb19b28768af697f031fb8d040/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c299621da59d95fab454d0113411b2c7d6d8adb19b28768af697f031fb8d040/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:45 localhost podman[28940]: 2025-10-05 07:44:45.148027011 +0000 UTC m=+0.194108312 container init 5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hoover, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Oct 5 03:44:45 localhost podman[28940]: 2025-10-05 07:44:45.157017202 +0000 UTC m=+0.203098503 container start 5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hoover, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, version=7, name=rhceph, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main) Oct 5 03:44:45 localhost podman[28940]: 2025-10-05 07:44:45.157452293 +0000 UTC m=+0.203533594 container attach 5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hoover, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Oct 5 03:44:45 localhost determined_hoover[28956]: --> passed data devices: 0 physical, 2 LVM Oct 5 03:44:45 localhost determined_hoover[28956]: --> relative data size: 1.0 Oct 5 03:44:45 localhost systemd[1]: var-lib-containers-storage-overlay-a9bfae650932d899490c4d54d57de3bea0dfa5c2ccce6c78e769f649e8675c6c-merged.mount: Deactivated successfully. 
Oct 5 03:44:45 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-authtool --gen-print-key Oct 5 03:44:45 localhost determined_hoover[28956]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1b959220-4400-4994-90f2-14032cbb3197 Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-authtool --gen-print-key Oct 5 03:44:46 localhost lvm[29010]: PV /dev/loop3 online, VG ceph_vg0 is complete. Oct 5 03:44:46 localhost lvm[29010]: VG ceph_vg0 finished Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap Oct 5 03:44:46 localhost determined_hoover[28956]: stderr: got monmap epoch 3 Oct 5 03:44:46 localhost determined_hoover[28956]: --> Creating keyring file for osd.0 Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/ Oct 5 03:44:46 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ 
--osd-uuid 1b959220-4400-4994-90f2-14032cbb3197 --setuser ceph --setgroup ceph
Oct 5 03:44:49 localhost determined_hoover[28956]: stderr: 2025-10-05T07:44:46.815+0000 7ffab33a9a80 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 5 03:44:49 localhost determined_hoover[28956]: stderr: 2025-10-05T07:44:46.815+0000 7ffab33a9a80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 5 03:44:49 localhost determined_hoover[28956]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 5 03:44:49 localhost determined_hoover[28956]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 5 03:44:49 localhost determined_hoover[28956]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 5 03:44:49 localhost determined_hoover[28956]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 86a0a5d0-5e12-45dd-860e-409d6f08bd43
Oct 5 03:44:49 localhost lvm[29940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 5 03:44:49 localhost lvm[29940]: VG ceph_vg1 finished
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
Oct 5 03:44:50 localhost determined_hoover[28956]: stderr: got monmap epoch 3
Oct 5 03:44:50 localhost determined_hoover[28956]: --> Creating keyring file for osd.3
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Oct 5 03:44:50 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 86a0a5d0-5e12-45dd-860e-409d6f08bd43 --setuser ceph --setgroup ceph
Oct 5 03:44:53 localhost determined_hoover[28956]: stderr: 2025-10-05T07:44:50.686+0000 7f260c0b5a80 -1 bluestore(/var/lib/ceph/osd/ceph-3//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 5 03:44:53 localhost determined_hoover[28956]: stderr: 2025-10-05T07:44:50.686+0000 7f260c0b5a80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
Oct 5 03:44:53 localhost determined_hoover[28956]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 5 03:44:53 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Oct 5 03:44:53 localhost determined_hoover[28956]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Oct 5 03:44:53 localhost determined_hoover[28956]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Oct 5 03:44:53 localhost determined_hoover[28956]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Oct 5 03:44:53 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 5 03:44:53 localhost determined_hoover[28956]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Oct 5 03:44:53 localhost determined_hoover[28956]: --> ceph-volume lvm activate successful for osd ID: 3
Oct 5 03:44:53 localhost determined_hoover[28956]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 5 03:44:53 localhost systemd[1]: libpod-5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c.scope: Deactivated successfully.
Oct 5 03:44:53 localhost podman[28940]: 2025-10-05 07:44:53.279708176 +0000 UTC m=+8.325789477 container died 5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hoover, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, release=553) Oct 5 03:44:53 localhost systemd[1]: libpod-5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c.scope: Consumed 3.719s CPU time. Oct 5 03:44:53 localhost systemd[1]: var-lib-containers-storage-overlay-7c299621da59d95fab454d0113411b2c7d6d8adb19b28768af697f031fb8d040-merged.mount: Deactivated successfully. 
Oct 5 03:44:53 localhost podman[30839]: 2025-10-05 07:44:53.40266894 +0000 UTC m=+0.070716526 container remove 5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hoover, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True) Oct 5 03:44:53 localhost systemd[1]: libpod-conmon-5d1b57bd5edf05ebfb0a60fbcea92b9f997b0ca6b9868c75ab02623803f4c54c.scope: Deactivated successfully. 
Oct 5 03:44:54 localhost podman[30918]: Oct 5 03:44:54 localhost podman[30918]: 2025-10-05 07:44:54.135165226 +0000 UTC m=+0.063680838 container create 45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_shannon, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) Oct 5 03:44:54 localhost systemd[1]: Started libpod-conmon-45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105.scope. Oct 5 03:44:54 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:54 localhost podman[30918]: 2025-10-05 07:44:54.104921115 +0000 UTC m=+0.033436757 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:54 localhost podman[30918]: 2025-10-05 07:44:54.210458833 +0000 UTC m=+0.138974435 container init 45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_shannon, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55) Oct 5 03:44:54 localhost podman[30918]: 2025-10-05 07:44:54.220014039 +0000 UTC m=+0.148529641 container start 45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_shannon, release=553, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux , 
com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 03:44:54 localhost podman[30918]: 2025-10-05 07:44:54.220220874 +0000 UTC m=+0.148736476 container attach 45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_shannon, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-type=git, version=7, distribution-scope=public, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
RELEASE=main) Oct 5 03:44:54 localhost stupefied_shannon[30933]: 167 167 Oct 5 03:44:54 localhost systemd[1]: libpod-45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105.scope: Deactivated successfully. Oct 5 03:44:54 localhost podman[30918]: 2025-10-05 07:44:54.223305027 +0000 UTC m=+0.151820689 container died 45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_shannon, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=) Oct 5 03:44:54 localhost podman[30938]: 2025-10-05 07:44:54.308466839 +0000 UTC m=+0.076697276 container remove 45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_shannon, ceph=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main) Oct 5 03:44:54 localhost systemd[1]: libpod-conmon-45954e1c234eeea3410a0ae89d6cd3166c0f7edc33532deac81958debe9ec105.scope: Deactivated successfully. Oct 5 03:44:54 localhost systemd[1]: var-lib-containers-storage-overlay-86a542221cf563af09da875f3e9aeeec66eca23d970e40ce5e9d10f0937b5b3f-merged.mount: Deactivated successfully. 
Oct 5 03:44:54 localhost podman[30959]: Oct 5 03:44:54 localhost podman[30959]: 2025-10-05 07:44:54.500116924 +0000 UTC m=+0.075294679 container create 59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_dewdney, maintainer=Guillaume Abrioux , version=7, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=553, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7) Oct 5 03:44:54 localhost systemd[1]: Started libpod-conmon-59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de.scope. Oct 5 03:44:54 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:54 localhost podman[30959]: 2025-10-05 07:44:54.468678342 +0000 UTC m=+0.043856157 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:44:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bb4d21e775f1498143b12f951a94d5b197b22eb3babaffb7efcf5dac555246/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bb4d21e775f1498143b12f951a94d5b197b22eb3babaffb7efcf5dac555246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bb4d21e775f1498143b12f951a94d5b197b22eb3babaffb7efcf5dac555246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:44:54 localhost podman[30959]: 2025-10-05 07:44:54.604372307 +0000 UTC m=+0.179550042 container init 59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_dewdney, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 03:44:54 localhost podman[30959]: 2025-10-05 07:44:54.614849678 +0000 UTC m=+0.190027443 container start 59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_dewdney, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, RELEASE=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 03:44:54 localhost podman[30959]: 2025-10-05 07:44:54.615147466 +0000 UTC m=+0.190325211 container attach 59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_dewdney, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-type=git)
Oct 5 03:44:54 localhost awesome_dewdney[30974]: {
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "0": [
Oct 5 03:44:54 localhost awesome_dewdney[30974]: {
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "devices": [
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "/dev/loop3"
Oct 5 03:44:54 localhost awesome_dewdney[30974]: ],
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_name": "ceph_lv0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_size": "7511998464",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=F29qDM-1dgf-5fB9-Cbhv-cEvI-2Y8v-PeHtrU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=659062ac-50b4-5607-b699-3105da7f55ee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1b959220-4400-4994-90f2-14032cbb3197,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_uuid": "F29qDM-1dgf-5fB9-Cbhv-cEvI-2Y8v-PeHtrU",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "name": "ceph_lv0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "path": "/dev/ceph_vg0/ceph_lv0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "tags": {
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.block_uuid": "F29qDM-1dgf-5fB9-Cbhv-cEvI-2Y8v-PeHtrU",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.cephx_lockbox_secret": "",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.cluster_fsid": "659062ac-50b4-5607-b699-3105da7f55ee",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.cluster_name": "ceph",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.crush_device_class": "",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.encrypted": "0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.osd_fsid": "1b959220-4400-4994-90f2-14032cbb3197",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.osd_id": "0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.osdspec_affinity": "default_drive_group",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.type": "block",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.vdo": "0"
Oct 5 03:44:54 localhost awesome_dewdney[30974]: },
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "type": "block",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "vg_name": "ceph_vg0"
Oct 5 03:44:54 localhost awesome_dewdney[30974]: }
Oct 5 03:44:54 localhost awesome_dewdney[30974]: ],
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "3": [
Oct 5 03:44:54 localhost awesome_dewdney[30974]: {
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "devices": [
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "/dev/loop4"
Oct 5 03:44:54 localhost awesome_dewdney[30974]: ],
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_name": "ceph_lv1",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_size": "7511998464",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=MhzIs0-Hctk-BzB2-LaX2-2uHl-LEw6-UdqgWt,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=659062ac-50b4-5607-b699-3105da7f55ee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=86a0a5d0-5e12-45dd-860e-409d6f08bd43,ceph.osd_id=3,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "lv_uuid": "MhzIs0-Hctk-BzB2-LaX2-2uHl-LEw6-UdqgWt",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "name": "ceph_lv1",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "path": "/dev/ceph_vg1/ceph_lv1",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "tags": {
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.block_uuid": "MhzIs0-Hctk-BzB2-LaX2-2uHl-LEw6-UdqgWt",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.cephx_lockbox_secret": "",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.cluster_fsid": "659062ac-50b4-5607-b699-3105da7f55ee",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.cluster_name": "ceph",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.crush_device_class": "",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.encrypted": "0",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.osd_fsid": "86a0a5d0-5e12-45dd-860e-409d6f08bd43",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.osd_id": "3",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.osdspec_affinity": "default_drive_group",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.type": "block",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "ceph.vdo": "0"
Oct 5 03:44:54 localhost awesome_dewdney[30974]: },
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "type": "block",
Oct 5 03:44:54 localhost awesome_dewdney[30974]: "vg_name": "ceph_vg1"
Oct 5 03:44:54 localhost awesome_dewdney[30974]: }
Oct 5 03:44:54 localhost
awesome_dewdney[30974]: ] Oct 5 03:44:54 localhost awesome_dewdney[30974]: } Oct 5 03:44:54 localhost systemd[1]: libpod-59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de.scope: Deactivated successfully. Oct 5 03:44:55 localhost podman[30983]: 2025-10-05 07:44:55.007429116 +0000 UTC m=+0.040784954 container died 59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_dewdney, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64) Oct 5 03:44:55 localhost podman[30983]: 2025-10-05 07:44:55.041898231 +0000 UTC m=+0.075254028 container remove 59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_dewdney, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-09-24T08:57:55, distribution-scope=public, version=7, vcs-type=git, release=553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 03:44:55 localhost systemd[1]: libpod-conmon-59ec142187449e887b157b31f048ca0b4d2c702b849f227c72b65379e88559de.scope: Deactivated successfully. Oct 5 03:44:55 localhost systemd[1]: var-lib-containers-storage-overlay-44bb4d21e775f1498143b12f951a94d5b197b22eb3babaffb7efcf5dac555246-merged.mount: Deactivated successfully. 
Oct 5 03:44:55 localhost podman[31070]: Oct 5 03:44:55 localhost podman[31070]: 2025-10-05 07:44:55.82117979 +0000 UTC m=+0.057874392 container create b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mahavira, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Oct 5 03:44:55 localhost systemd[1]: Started libpod-conmon-b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15.scope. Oct 5 03:44:55 localhost systemd[1]: Started libcrun container. 
Oct 5 03:44:55 localhost podman[31070]: 2025-10-05 07:44:55.879928694 +0000 UTC m=+0.116623296 container init b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mahavira, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-type=git, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, name=rhceph)
Oct 5 03:44:55 localhost podman[31070]: 2025-10-05 07:44:55.889539891 +0000 UTC m=+0.126234503 container start b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mahavira, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git)
Oct 5 03:44:55 localhost podman[31070]: 2025-10-05 07:44:55.889999683 +0000 UTC m=+0.126694295 container attach b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mahavira, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container)
Oct 5 03:44:55 localhost upbeat_mahavira[31085]: 167 167
Oct 5 03:44:55 localhost podman[31070]: 2025-10-05 07:44:55.793017916 +0000 UTC m=+0.029712548 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:55 localhost systemd[1]: libpod-b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15.scope: Deactivated successfully.
Oct 5 03:44:55 localhost podman[31070]: 2025-10-05 07:44:55.893692333 +0000 UTC m=+0.130386945 container died b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mahavira, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, version=7, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container)
Oct 5 03:44:55 localhost podman[31090]: 2025-10-05 07:44:55.985489542 +0000 UTC m=+0.078256027 container remove b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_mahavira, architecture=x86_64, io.buildah.version=1.33.12, GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, RELEASE=main, name=rhceph)
Oct 5 03:44:55 localhost systemd[1]: libpod-conmon-b40892e3f941f7204634710630174cab9f1456917b069c94c585b79eeb1d3e15.scope: Deactivated successfully.
Oct 5 03:44:56 localhost podman[31120]:
Oct 5 03:44:56 localhost podman[31120]: 2025-10-05 07:44:56.304650833 +0000 UTC m=+0.071451515 container create 456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container)
Oct 5 03:44:56 localhost systemd[1]: Started libpod-conmon-456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402.scope.
Oct 5 03:44:56 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:56 localhost podman[31120]: 2025-10-05 07:44:56.276124229 +0000 UTC m=+0.042924941 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5fd574068214e5598d515c9f0170a5ff1908f85e5b7a9ca02b8ed40e571c4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-a946937da2e103a48753e7953bfdcf24ca413de7dc9b9166cd1c7544db2dbd9d-merged.mount: Deactivated successfully.
Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5fd574068214e5598d515c9f0170a5ff1908f85e5b7a9ca02b8ed40e571c4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5fd574068214e5598d515c9f0170a5ff1908f85e5b7a9ca02b8ed40e571c4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5fd574068214e5598d515c9f0170a5ff1908f85e5b7a9ca02b8ed40e571c4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5fd574068214e5598d515c9f0170a5ff1908f85e5b7a9ca02b8ed40e571c4d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:56 localhost podman[31120]: 2025-10-05 07:44:56.428634686 +0000 UTC m=+0.195435348 container init 456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test, build-date=2025-09-24T08:57:55, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 03:44:56 localhost podman[31120]: 2025-10-05 07:44:56.43958592 +0000 UTC m=+0.206386612 container start 456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:44:56 localhost podman[31120]: 2025-10-05 07:44:56.439828106 +0000 UTC m=+0.206628788 container attach 456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 03:44:56 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test[31135]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 5 03:44:56 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test[31135]: [--no-systemd] [--no-tmpfs]
Oct 5 03:44:56 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test[31135]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 5 03:44:56 localhost systemd[1]: libpod-456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402.scope: Deactivated successfully.
Oct 5 03:44:56 localhost podman[31120]: 2025-10-05 07:44:56.662004438 +0000 UTC m=+0.428805120 container died 456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True)
Oct 5 03:44:56 localhost systemd[1]: var-lib-containers-storage-overlay-5c5fd574068214e5598d515c9f0170a5ff1908f85e5b7a9ca02b8ed40e571c4d-merged.mount: Deactivated successfully.
Oct 5 03:44:56 localhost systemd-journald[618]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Oct 5 03:44:56 localhost systemd-journald[618]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 5 03:44:56 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:44:56 localhost podman[31140]: 2025-10-05 07:44:56.759221294 +0000 UTC m=+0.083863388 container remove 456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate-test, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , version=7)
Oct 5 03:44:56 localhost systemd[1]: libpod-conmon-456ec865cf132c1be1f578db928f90f462ed5f50d094bf1589c35bb10a7b7402.scope: Deactivated successfully.
Oct 5 03:44:56 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 03:44:57 localhost systemd[1]: Reloading.
Oct 5 03:44:57 localhost systemd-rc-local-generator[31201]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:57 localhost systemd-sysv-generator[31204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:57 localhost systemd[1]: Reloading.
Oct 5 03:44:57 localhost systemd-rc-local-generator[31238]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:44:57 localhost systemd-sysv-generator[31243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:44:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:44:57 localhost systemd[1]: Starting Ceph osd.0 for 659062ac-50b4-5607-b699-3105da7f55ee...
Oct 5 03:44:57 localhost podman[31303]:
Oct 5 03:44:57 localhost podman[31303]: 2025-10-05 07:44:57.939687852 +0000 UTC m=+0.076619664 container create f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main)
Oct 5 03:44:57 localhost systemd[1]: Started libcrun container.
Oct 5 03:44:58 localhost podman[31303]: 2025-10-05 07:44:57.907109219 +0000 UTC m=+0.044041051 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddbe7aa4ce243dcd82d5a366d0c7e8f6012ae5120d06218d12cac617d05b4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddbe7aa4ce243dcd82d5a366d0c7e8f6012ae5120d06218d12cac617d05b4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddbe7aa4ce243dcd82d5a366d0c7e8f6012ae5120d06218d12cac617d05b4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddbe7aa4ce243dcd82d5a366d0c7e8f6012ae5120d06218d12cac617d05b4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddbe7aa4ce243dcd82d5a366d0c7e8f6012ae5120d06218d12cac617d05b4c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:58 localhost podman[31303]: 2025-10-05 07:44:58.066183701 +0000 UTC m=+0.203115483 container init f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, release=553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12)
Oct 5 03:44:58 localhost podman[31303]: 2025-10-05 07:44:58.07474435 +0000 UTC m=+0.211676162 container start f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Oct 5 03:44:58 localhost podman[31303]: 2025-10-05 07:44:58.075080869 +0000 UTC m=+0.212012681 container attach f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 5 03:44:58 localhost bash[31303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 5 03:44:58 localhost bash[31303]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 5 03:44:58 localhost bash[31303]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 5 03:44:58 localhost bash[31303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 5 03:44:58 localhost bash[31303]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 5 03:44:58 localhost bash[31303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 5 03:44:58 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate[31317]: --> ceph-volume raw activate successful for osd ID: 0
Oct 5 03:44:58 localhost bash[31303]: --> ceph-volume raw activate successful for osd ID: 0
Oct 5 03:44:58 localhost systemd[1]: libpod-f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51.scope: Deactivated successfully.
Oct 5 03:44:58 localhost podman[31303]: 2025-10-05 07:44:58.812449797 +0000 UTC m=+0.949381599 container died f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:44:58 localhost podman[31446]: 2025-10-05 07:44:58.903057334 +0000 UTC m=+0.077549549 container remove f8fb0df920a596eba86941400eea21c96d13e5a0135c23b70cc07413b5e9ce51 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0-activate, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., ceph=True, version=7)
Oct 5 03:44:58 localhost systemd[1]: var-lib-containers-storage-overlay-d1ddbe7aa4ce243dcd82d5a366d0c7e8f6012ae5120d06218d12cac617d05b4c-merged.mount: Deactivated successfully.
Oct 5 03:44:59 localhost podman[31505]:
Oct 5 03:44:59 localhost podman[31505]: 2025-10-05 07:44:59.215377941 +0000 UTC m=+0.070618992 container create 4cd23341a99963a1ca640449d883d0df3451945edfb28d73e52ccd2dc961c281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Oct 5 03:44:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cb4a4ab09f492d16faa43725b94505aa47cc19e11f1ab928e593d7120a31a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cb4a4ab09f492d16faa43725b94505aa47cc19e11f1ab928e593d7120a31a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:59 localhost podman[31505]: 2025-10-05 07:44:59.187149195 +0000 UTC m=+0.042390246 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:44:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cb4a4ab09f492d16faa43725b94505aa47cc19e11f1ab928e593d7120a31a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cb4a4ab09f492d16faa43725b94505aa47cc19e11f1ab928e593d7120a31a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5cb4a4ab09f492d16faa43725b94505aa47cc19e11f1ab928e593d7120a31a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 5 03:44:59 localhost podman[31505]: 2025-10-05 07:44:59.326586221 +0000 UTC m=+0.181827282 container init 4cd23341a99963a1ca640449d883d0df3451945edfb28d73e52ccd2dc961c281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Oct 5 03:44:59 localhost podman[31505]: 2025-10-05 07:44:59.333499977 +0000 UTC m=+0.188741028 container start 4cd23341a99963a1ca640449d883d0df3451945edfb28d73e52ccd2dc961c281 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, name=rhceph, io.buildah.version=1.33.12, ceph=True, maintainer=Guillaume Abrioux , release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container)
Oct 5 03:44:59 localhost bash[31505]: 4cd23341a99963a1ca640449d883d0df3451945edfb28d73e52ccd2dc961c281
Oct 5 03:44:59 localhost systemd[1]: Started Ceph osd.0 for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:44:59 localhost ceph-osd[31524]: set uid:gid to 167:167 (ceph:ceph)
Oct 5 03:44:59 localhost ceph-osd[31524]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Oct 5 03:44:59 localhost ceph-osd[31524]: pidfile_write: ignore empty --pid-file
Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:44:59 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:44:59 localhost ceph-osd[31524]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0
GiB Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) close Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) close Oct 5 03:44:59 localhost ceph-osd[31524]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal Oct 5 03:44:59 localhost ceph-osd[31524]: load: jerasure load: lrc Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:44:59 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:44:59 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) close Oct 5 03:45:00 localhost podman[31615]: Oct 5 03:45:00 localhost podman[31615]: 2025-10-05 07:45:00.145498783 +0000 UTC m=+0.071294311 container create 993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_johnson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-type=git, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph) Oct 5 03:45:00 localhost systemd[1]: Started libpod-conmon-993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34.scope. Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) close Oct 5 03:45:00 localhost systemd[1]: Started libcrun container. 
Oct 5 03:45:00 localhost podman[31615]: 2025-10-05 07:45:00.116492326 +0000 UTC m=+0.042287854 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:00 localhost podman[31615]: 2025-10-05 07:45:00.226517533 +0000 UTC m=+0.152313071 container init 993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_johnson, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55) Oct 5 03:45:00 localhost podman[31615]: 2025-10-05 07:45:00.236742858 +0000 UTC m=+0.162538386 container start 993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_johnson, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:45:00 localhost podman[31615]: 2025-10-05 07:45:00.237067676 +0000 UTC m=+0.162863254 container attach 993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_johnson, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True) Oct 5 03:45:00 
localhost objective_johnson[31630]: 167 167 Oct 5 03:45:00 localhost systemd[1]: libpod-993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34.scope: Deactivated successfully. Oct 5 03:45:00 localhost podman[31615]: 2025-10-05 07:45:00.240978482 +0000 UTC m=+0.166774000 container died 993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_johnson, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, name=rhceph, release=553, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) Oct 5 03:45:00 localhost systemd[1]: var-lib-containers-storage-overlay-da55894a96362411e9894a910d11bfedbe9fb2677ff656a58e3ee1acb96b53c1-merged.mount: Deactivated successfully. 
Oct 5 03:45:00 localhost podman[31639]: 2025-10-05 07:45:00.339012118 +0000 UTC m=+0.085266186 container remove 993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_johnson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Oct 5 03:45:00 localhost systemd[1]: libpod-conmon-993bb2c623a2de45820da785fa4e7ac0a247917864643ad9ea385cebcb9b9f34.scope: Deactivated successfully. 
Oct 5 03:45:00 localhost ceph-osd[31524]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs mount Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs mount shared_bdev_used = 0 Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 
db.slow,7136398540 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: RocksDB version: 7.9.2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Git sha 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DB SUMMARY Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DB Session ID: C6CFWI0J6B5SMHIREARS Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: CURRENT file: CURRENT Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: IDENTITY file: IDENTITY Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.error_if_exists: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.create_if_missing: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.env: 0x564cf467ecb0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.fs: LegacyFileSystem Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.info_log: 0x564cf5368780 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_file_opening_threads: 16 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.statistics: (nil) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_fsync: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.max_log_file_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_fallocate: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_direct_reads: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.create_missing_column_families: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.db_log_dir: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_dir: db.wal Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_cache_numshardbits: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.advise_random_on_open: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_manager: 0x564cf43d4140 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.random_access_max_buffer_size: 
1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.rate_limiter: (nil) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.unordered_write: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.row_cache: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.two_write_queues: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.manual_wal_flush: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_compression: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.atomic_flush: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.log_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.file_checksum_gen_factory: Unknown Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.best_efforts_recovery: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_data_in_errors: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.db_host_id: __hostname__ Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enforce_single_del_contracts: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_background_jobs: 4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_background_compactions: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_subcompactions: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.writable_file_max_buffer_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.delayed_write_rate : 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_total_wal_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.stats_dump_period_sec: 600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.stats_persist_period_sec: 600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_open_files: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bytes_per_sync: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_readahead_size: 2097152 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_background_flushes: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Compression algorithms supported: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kZSTD supported: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kXpressCompression supported: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kBZip2Compression supported: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kLZ4Compression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kZlibCompression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kSnappyCompression supported: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DMutex implementation: pthread_mutex_t Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: 
rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x564cf5368940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: 
rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: 
LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5368b60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5566] Recovered from 
manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 63dd3b69-7ef9-4853-99a3-ec35b57a3780 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300521921, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
[db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300522690, "job": 1, "event": "recovery_finished"}
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 5 03:45:00 localhost ceph-osd[31524]: freelist init
Oct 5 03:45:00 localhost ceph-osd[31524]: freelist _read_cfg
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs umount
Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) close
Oct 5 03:45:00 localhost podman[31861]:
Oct 5 03:45:00 localhost podman[31861]: 2025-10-05 07:45:00.679596613 +0000 UTC m=+0.080282742 container create 3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 03:45:00 localhost systemd[1]: Started libpod-conmon-3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80.scope.
Oct 5 03:45:00 localhost systemd[1]: Started libcrun container.
Oct 5 03:45:00 localhost podman[31861]: 2025-10-05 07:45:00.650391941 +0000 UTC m=+0.051078100 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debd3040bb96ca4bab444001ba5bb19e4d5120309fa1ceb22895f32a4c9fc5c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 5 03:45:00 localhost ceph-osd[31524]: bdev(0x564cf43eb180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs mount
Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 5 03:45:00 localhost ceph-osd[31524]: bluefs mount shared_bdev_used = 4718592
Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debd3040bb96ca4bab444001ba5bb19e4d5120309fa1ceb22895f32a4c9fc5c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: RocksDB version: 7.9.2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Git sha 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DB SUMMARY
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DB Session ID: C6CFWI0J6B5SMHIREART
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: CURRENT file: CURRENT
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: IDENTITY file: IDENTITY
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.error_if_exists: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.create_if_missing: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.env: 0x564cf4512a80
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.fs: LegacyFileSystem
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.info_log: 0x564cf5368d40
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_file_opening_threads: 16
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.statistics: (nil)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_fsync: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_log_file_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.log_file_time_to_roll: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.keep_log_file_num: 1000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.recycle_log_file_num: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_fallocate: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_mmap_reads: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_mmap_writes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_direct_reads: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.create_missing_column_families: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.db_log_dir:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_dir: db.wal
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_cache_numshardbits: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.advise_random_on_open: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.db_write_buffer_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_manager: 0x564cf43d55e0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.use_adaptive_mutex: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.rate_limiter: (nil)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_recovery_mode: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_thread_tracking: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_pipelined_write: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.unordered_write: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.row_cache: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_ingest_behind: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.two_write_queues: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.manual_wal_flush: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_compression: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.atomic_flush: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.persist_stats_to_disk: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.log_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.best_efforts_recovery: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.allow_data_in_errors: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.db_host_id: __hostname__
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enforce_single_del_contracts: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_background_jobs: 4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_background_compactions: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_subcompactions: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.delayed_write_rate : 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.stats_dump_period_sec: 600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.stats_persist_period_sec: 600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_open_files: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bytes_per_sync: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_background_flushes: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Compression algorithms supported:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kZSTD supported: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kXpressCompression supported: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kBZip2Compression supported: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kLZ4Compression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kZlibCompression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: #011kSnappyCompression supported: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: 
kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 
data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost kernel: xfs filesystem 
being remounted at /var/lib/containers/storage/overlay/debd3040bb96ca4bab444001ba5bb19e4d5120309fa1ceb22895f32a4c9fc5c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: 
rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: 
Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426e00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426e00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style:
kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 
03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debd3040bb96ca4bab444001ba5bb19e4d5120309fa1ceb22895f32a4c9fc5c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.merge_operator: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564cf5426e00)#012 cache_index_and_filter_blocks: 
1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x564cf43c3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression: LZ4 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.num_levels: 7 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 
5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_slowdown_writes_trigger: 20 
Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:00 localhost 
ceph-osd[31524]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:00 
localhost ceph-osd[31524]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/column_family.cc:635] 
#011(skipping printing options) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 63dd3b69-7ef9-4853-99a3-ec35b57a3780 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300790937, 
"job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 5 03:45:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debd3040bb96ca4bab444001ba5bb19e4d5120309fa1ceb22895f32a4c9fc5c8/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300807328, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650300, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63dd3b69-7ef9-4853-99a3-ec35b57a3780", "db_session_id": "C6CFWI0J6B5SMHIREART", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:00 localhost podman[31861]: 2025-10-05 07:45:00.810512921 
+0000 UTC m=+0.211199030 container init 3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, release=553, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300811540, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, 
"fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650300, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63dd3b69-7ef9-4853-99a3-ec35b57a3780", "db_session_id": "C6CFWI0J6B5SMHIREART", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300815658, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 
1759650300, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63dd3b69-7ef9-4853-99a3-ec35b57a3780", "db_session_id": "C6CFWI0J6B5SMHIREART", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650300819639, "job": 1, "event": "recovery_finished"} Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Oct 5 03:45:00 localhost podman[31861]: 2025-10-05 07:45:00.821326671 +0000 UTC m=+0.222012810 container start 3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64) Oct 5 03:45:00 localhost podman[31861]: 2025-10-05 07:45:00.821591528 +0000 UTC m=+0.222277697 container attach 3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564cf5444380 Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: DB pointer 0x564cf52bfa00 Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options 
compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4 Oct 5 03:45:00 localhost ceph-osd[31524]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 
0.00 1 0.016 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 460.80 MB usag Oct 5 03:45:00 localhost ceph-osd[31524]: 
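The RocksDB "DUMPING STATS" blocks above arrive as single syslog lines in which rsyslog has escaped each embedded newline as `#012` (the octal code for LF), flattening the compaction-stats tables. A minimal sketch of restoring the original multi-line layout when reading such a log; the sample string is taken from the dump above, and the generic `#NNN` handling (which would also cover `#011` tabs) is an assumption about rsyslog's control-character escaping:

```python
import re

def unescape_syslog(line: str) -> str:
    """Expand rsyslog control-character escapes (#NNN, octal) back to characters."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

# Example: a fragment of the DB Stats dump regains its line breaks.
flat = "** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval"
print(unescape_syslog(flat))
```

Piping a journal excerpt through this before inspection makes the per-level compaction tables (`Level Files Size Score ...`) readable again as the columnar output RocksDB originally emitted.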
/builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Oct 5 03:45:00 localhost ceph-osd[31524]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Oct 5 03:45:00 localhost ceph-osd[31524]: _get_class not permitted to load lua Oct 5 03:45:00 localhost ceph-osd[31524]: _get_class not permitted to load sdk Oct 5 03:45:00 localhost ceph-osd[31524]: _get_class not permitted to load test_remote_reads Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 load_pgs Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 load_pgs opened 0 pgs Oct 5 03:45:00 localhost ceph-osd[31524]: osd.0 0 log_to_monitors true Oct 5 03:45:00 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0[31520]: 2025-10-05T07:45:00.860+0000 7f9e80335a80 -1 osd.0 0 log_to_monitors true Oct 5 03:45:01 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test[31876]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Oct 5 03:45:01 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test[31876]: [--no-systemd] [--no-tmpfs] Oct 5 03:45:01 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test[31876]: ceph-volume activate: error: unrecognized arguments: --bad-option Oct 5 03:45:01 localhost systemd[1]: libpod-3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80.scope: Deactivated successfully. 
Oct 5 03:45:01 localhost podman[31861]: 2025-10-05 07:45:01.07398203 +0000 UTC m=+0.474668159 container died 3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test, version=7, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 03:45:01 localhost podman[32096]: 2025-10-05 07:45:01.162932324 +0000 UTC m=+0.077086637 container remove 3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate-test, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, architecture=x86_64, 
com.redhat.component=rhceph-container, vcs-type=git, release=553, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Oct 5 03:45:01 localhost systemd[1]: libpod-conmon-3f917d7f112ba402eaff8c7101050bc146331172094b9de98124e3ef9cb24b80.scope: Deactivated successfully. Oct 5 03:45:01 localhost systemd[1]: Reloading. Oct 5 03:45:01 localhost systemd-rc-local-generator[32151]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:45:01 localhost systemd-sysv-generator[32157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:45:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:45:01 localhost systemd[1]: Reloading. Oct 5 03:45:01 localhost systemd-sysv-generator[32196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:45:01 localhost systemd-rc-local-generator[32193]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 03:45:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:45:01 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Oct 5 03:45:01 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Oct 5 03:45:01 localhost systemd[1]: Starting Ceph osd.3 for 659062ac-50b4-5607-b699-3105da7f55ee... Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 done with init, starting boot process Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 start_boot Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1 Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Oct 5 03:45:02 localhost ceph-osd[31524]: osd.0 0 bench count 12288000 bsize 4 KiB Oct 5 03:45:02 localhost podman[32259]: Oct 5 03:45:02 localhost podman[32259]: 2025-10-05 07:45:02.193111825 +0000 UTC m=+0.060227494 container create 70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate, io.openshift.expose-services=, name=rhceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container) Oct 5 03:45:02 localhost systemd[1]: Started libcrun container. Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b992bad6a69f661c912ed7a1a2cec377ca3eeb018d67a626adee0b5490c5e05/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost podman[32259]: 2025-10-05 07:45:02.171324452 +0000 UTC m=+0.038440091 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b992bad6a69f661c912ed7a1a2cec377ca3eeb018d67a626adee0b5490c5e05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b992bad6a69f661c912ed7a1a2cec377ca3eeb018d67a626adee0b5490c5e05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b992bad6a69f661c912ed7a1a2cec377ca3eeb018d67a626adee0b5490c5e05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b992bad6a69f661c912ed7a1a2cec377ca3eeb018d67a626adee0b5490c5e05/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:02 localhost podman[32259]: 2025-10-05 
07:45:02.337053752 +0000 UTC m=+0.204169411 container init 70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, version=7, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, release=553, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True) Oct 5 03:45:02 localhost podman[32259]: 2025-10-05 07:45:02.359986887 +0000 UTC m=+0.227102556 container start 70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=553, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., version=7, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, name=rhceph) Oct 5 03:45:02 localhost podman[32259]: 2025-10-05 07:45:02.360304825 +0000 UTC m=+0.227420544 container attach 70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, version=7, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main) Oct 5 03:45:02 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Oct 5 03:45:02 localhost bash[32259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost bash[32259]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:02 localhost bash[32259]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Oct 5 03:45:03 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Oct 5 03:45:03 localhost bash[32259]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Oct 5 03:45:03 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Oct 5 03:45:03 localhost bash[32259]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Oct 5 03:45:03 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Oct 5 03:45:03 localhost bash[32259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Oct 5 03:45:03 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate[32274]: --> ceph-volume raw activate successful for osd ID: 3 Oct 5 03:45:03 localhost bash[32259]: --> ceph-volume raw activate successful for osd ID: 3 Oct 5 
03:45:03 localhost systemd[1]: libpod-70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a.scope: Deactivated successfully. Oct 5 03:45:03 localhost podman[32390]: 2025-10-05 07:45:03.117482162 +0000 UTC m=+0.054210693 container died 70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, release=553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Oct 5 03:45:03 localhost podman[32390]: 2025-10-05 07:45:03.191752623 +0000 UTC m=+0.128481114 container remove 70747f86dac0e9d580f1e207a8aaf80849684ec5f0a590dd1b11b3596605117a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3-activate, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, version=7, description=Red Hat Ceph Storage 7) Oct 5 03:45:03 localhost systemd[1]: tmp-crun.n8Kg4j.mount: Deactivated successfully. Oct 5 03:45:03 localhost systemd[1]: var-lib-containers-storage-overlay-9b992bad6a69f661c912ed7a1a2cec377ca3eeb018d67a626adee0b5490c5e05-merged.mount: Deactivated successfully. 
Oct 5 03:45:03 localhost podman[32450]: Oct 5 03:45:03 localhost podman[32450]: 2025-10-05 07:45:03.552026555 +0000 UTC m=+0.074230919 container create 751e0e551185742b1f1dcd947288e6b4d23a199c35d787086c099ba49e07ef1f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3, CEPH_POINT_RELEASE=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, release=553, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, ceph=True, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 03:45:03 localhost systemd[1]: tmp-crun.O0fw4G.mount: Deactivated successfully. 
Oct 5 03:45:03 localhost podman[32450]: 2025-10-05 07:45:03.522047982 +0000 UTC m=+0.044252386 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bad49835165c55fcb532d7fde3e5b6e1c28ffa593abea724bbb05024a53182a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bad49835165c55fcb532d7fde3e5b6e1c28ffa593abea724bbb05024a53182a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bad49835165c55fcb532d7fde3e5b6e1c28ffa593abea724bbb05024a53182a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bad49835165c55fcb532d7fde3e5b6e1c28ffa593abea724bbb05024a53182a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bad49835165c55fcb532d7fde3e5b6e1c28ffa593abea724bbb05024a53182a/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:03 localhost podman[32450]: 2025-10-05 07:45:03.69741504 +0000 UTC m=+0.219619394 container init 751e0e551185742b1f1dcd947288e6b4d23a199c35d787086c099ba49e07ef1f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7) Oct 5 03:45:03 localhost podman[32450]: 2025-10-05 07:45:03.722305387 +0000 UTC m=+0.244509741 container start 751e0e551185742b1f1dcd947288e6b4d23a199c35d787086c099ba49e07ef1f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_CLEAN=True) 
Oct 5 03:45:03 localhost bash[32450]: 751e0e551185742b1f1dcd947288e6b4d23a199c35d787086c099ba49e07ef1f
Oct 5 03:45:03 localhost systemd[1]: Started Ceph osd.3 for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 03:45:03 localhost ceph-osd[32468]: set uid:gid to 167:167 (ceph:ceph)
Oct 5 03:45:03 localhost ceph-osd[32468]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Oct 5 03:45:03 localhost ceph-osd[32468]: pidfile_write: ignore empty --pid-file
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:03 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:03 localhost ceph-osd[32468]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Oct 5 03:45:03 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) close
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) close
Oct 5 03:45:04 localhost ceph-osd[32468]: starting osd.3 osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
Oct 5 03:45:04 localhost ceph-osd[32468]: load: jerasure load: lrc
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) close
Oct 5 03:45:04 localhost podman[32558]:
Oct 5 03:45:04 localhost podman[32558]: 2025-10-05 07:45:04.548549465 +0000 UTC m=+0.083465737 container create 822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_margulis, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=553, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7)
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) close
Oct 5 03:45:04 localhost systemd[1]: Started libpod-conmon-822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b.scope.
Oct 5 03:45:04 localhost podman[32558]: 2025-10-05 07:45:04.507772213 +0000 UTC m=+0.042688505 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:04 localhost systemd[1]: Started libcrun container.
Oct 5 03:45:04 localhost podman[32558]: 2025-10-05 07:45:04.631547469 +0000 UTC m=+0.166463741 container init 822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_margulis, architecture=x86_64, io.buildah.version=1.33.12, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=553, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 03:45:04 localhost optimistic_margulis[32578]: 167 167
Oct 5 03:45:04 localhost systemd[1]: libpod-822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b.scope: Deactivated successfully.
Oct 5 03:45:04 localhost podman[32558]: 2025-10-05 07:45:04.663281569 +0000 UTC m=+0.198197841 container start 822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_margulis, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7)
Oct 5 03:45:04 localhost podman[32558]: 2025-10-05 07:45:04.663524316 +0000 UTC m=+0.198440588 container attach 822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_margulis, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 03:45:04 localhost podman[32558]: 2025-10-05 07:45:04.666534126 +0000 UTC m=+0.201450408 container died 822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_margulis, architecture=x86_64, io.openshift.expose-services=, version=7, release=553, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 5 03:45:04 localhost podman[32583]: 2025-10-05 07:45:04.767528052 +0000 UTC m=+0.109517715 container remove 822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_margulis, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, release=553, vcs-type=git)
Oct 5 03:45:04 localhost systemd[1]: libpod-conmon-822f1dda340997fd938b9a2610f599cb307fc88fe94f79a1b26e69c0f3fb057b.scope: Deactivated successfully.
Oct 5 03:45:04 localhost ceph-osd[32468]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 5 03:45:04 localhost ceph-osd[32468]: osd.3:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af42e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:04 localhost ceph-osd[32468]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Oct 5 03:45:04 localhost ceph-osd[32468]: bluefs mount
Oct 5 03:45:04 localhost ceph-osd[32468]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 5 03:45:04 localhost ceph-osd[32468]: bluefs mount shared_bdev_used = 0
Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: RocksDB version: 7.9.2
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Git sha 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: DB SUMMARY
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: DB Session ID: NUVQ05973K0TJ5ECPLCK
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: CURRENT file: CURRENT
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: IDENTITY file: IDENTITY
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.error_if_exists: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.create_if_missing: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.env: 0x55656b1d6cb0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.fs: LegacyFileSystem
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.info_log: 0x55656bec0340
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_file_opening_threads: 16
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.statistics: (nil)
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.use_fsync: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_log_file_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.log_file_time_to_roll: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.keep_log_file_num: 1000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.recycle_log_file_num: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.allow_fallocate: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.allow_mmap_reads: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.allow_mmap_writes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.use_direct_reads: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.create_missing_column_families: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.db_log_dir:
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.wal_dir: db.wal
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_cache_numshardbits: 6
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.advise_random_on_open: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.db_write_buffer_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_manager: 0x55656af2c140
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.use_adaptive_mutex: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.rate_limiter: (nil)
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.wal_recovery_mode: 2
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_thread_tracking: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_pipelined_write: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.unordered_write: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.row_cache: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.wal_filter: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.allow_ingest_behind: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.two_write_queues: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.manual_wal_flush: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.wal_compression: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.atomic_flush: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.persist_stats_to_disk: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.log_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.best_efforts_recovery: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.allow_data_in_errors: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.db_host_id: __hostname__
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enforce_single_del_contracts: true
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_background_jobs: 4
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_background_compactions: -1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_subcompactions: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.delayed_write_rate : 16777216
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.stats_dump_period_sec: 600
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.stats_persist_period_sec: 600
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_open_files: -1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bytes_per_sync: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_background_flushes: -1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Compression algorithms supported:
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kZSTD supported: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kXpressCompression supported: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kBZip2Compression supported: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kLZ4Compression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kZlibCompression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: #011kSnappyCompression supported: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors:
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:04
localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 
localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost 
ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost 
ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 
03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 
localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 
03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 
localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:04 localhost 
ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost 
ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 
03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x55656bec0500)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost 
ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: 
rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: 
LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost 
ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 
Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 
03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656bec0720)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 
03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:04 
localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:04 localhost ceph-osd[32468]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
Options.force_consistency_checks: 1 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5566] Recovered from 
manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0bffd29d-fb39-49ca-9d12-704daa196949 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650304859376, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: 
[db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650304859632, "job": 1, "event": "recovery_finished"} Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old nid_max 1025 Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old blobid_max 10240 Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta min_alloc_size 0x1000 Oct 5 03:45:04 localhost ceph-osd[32468]: freelist init Oct 5 03:45:04 localhost ceph-osd[32468]: freelist _read_cfg Oct 5 03:45:04 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Oct 5 03:45:04 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Oct 5 03:45:04 localhost ceph-osd[32468]: bluefs umount Oct 5 03:45:04 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) close Oct 5 03:45:04 localhost podman[32799]: Oct 5 03:45:04 localhost podman[32799]: 2025-10-05 07:45:04.997452903 +0000 
UTC m=+0.077508437 container create 6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_dhawan, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 03:45:05 localhost systemd[1]: Started libpod-conmon-6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6.scope. Oct 5 03:45:05 localhost systemd[1]: Started libcrun container. 
Oct 5 03:45:05 localhost podman[32799]: 2025-10-05 07:45:04.959238229 +0000 UTC m=+0.039293783 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 03:45:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b43acc358b7c76134782e5c9477eeb03b1de01c80a28d7e70f4986a6a66d8df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b43acc358b7c76134782e5c9477eeb03b1de01c80a28d7e70f4986a6a66d8df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b43acc358b7c76134782e5c9477eeb03b1de01c80a28d7e70f4986a6a66d8df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 03:45:05 localhost podman[32799]: 2025-10-05 07:45:05.115850206 +0000 UTC m=+0.195905770 container init 6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_dhawan, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , release=553)
Oct 5 03:45:05 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Oct 5 03:45:05 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Oct 5 03:45:05 localhost ceph-osd[32468]: bdev(0x55656af43180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 5 03:45:05 localhost ceph-osd[32468]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Oct 5 03:45:05 localhost ceph-osd[32468]: bluefs mount
Oct 5 03:45:05 localhost ceph-osd[32468]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 5 03:45:05 localhost ceph-osd[32468]: bluefs mount shared_bdev_used = 4718592
Oct 5 03:45:05 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 5 03:45:05 localhost podman[32799]: 2025-10-05 07:45:05.127723584 +0000 UTC m=+0.207779088 container start 6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_dhawan, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 03:45:05 localhost podman[32799]: 2025-10-05 07:45:05.127916059 +0000 UTC m=+0.207971603 container attach 6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_dhawan, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: RocksDB version: 7.9.2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Git sha 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: DB SUMMARY
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: DB Session ID: NUVQ05973K0TJ5ECPLCL
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: CURRENT file: CURRENT
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: IDENTITY file: IDENTITY
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.error_if_exists: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.create_if_missing: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_checks: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.flush_verify_memtable_count: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.env: 0x55656b06a460
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.fs: LegacyFileSystem
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.info_log: 0x55656bec1f60
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_file_opening_threads: 16
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.statistics: (nil)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.use_fsync: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_log_file_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_manifest_file_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.log_file_time_to_roll: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.keep_log_file_num: 1000
Oct 5 03:45:05
localhost ceph-osd[32468]: rocksdb: Options.recycle_log_file_num: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.allow_fallocate: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.allow_mmap_reads: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.allow_mmap_writes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.use_direct_reads: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.create_missing_column_families: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.db_log_dir:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.wal_dir: db.wal
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_cache_numshardbits: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.WAL_ttl_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.WAL_size_limit_MB: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.manifest_preallocation_size: 4194304
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.is_fd_close_on_exec: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.advise_random_on_open: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.db_write_buffer_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_manager: 0x55656af2d5e0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.access_hint_on_compaction_start: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.random_access_max_buffer_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.use_adaptive_mutex: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.rate_limiter: (nil)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.wal_recovery_mode: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_thread_tracking: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_pipelined_write: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.unordered_write: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.allow_concurrent_memtable_write: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_thread_max_yield_usec: 100
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_thread_slow_yield_usec: 3
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.row_cache: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.wal_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.avoid_flush_during_recovery: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.allow_ingest_behind: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.two_write_queues: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.manual_wal_flush: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.wal_compression: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.atomic_flush: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.persist_stats_to_disk: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_dbid_to_manifest: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.log_readahead_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.file_checksum_gen_factory: Unknown
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.best_efforts_recovery: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.allow_data_in_errors: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.db_host_id: __hostname__
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enforce_single_del_contracts: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_background_jobs: 4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_background_compactions: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_subcompactions: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.avoid_flush_during_shutdown: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.writable_file_max_buffer_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.delayed_write_rate : 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_total_wal_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.stats_dump_period_sec: 600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.stats_persist_period_sec: 600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.stats_history_buffer_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_open_files: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bytes_per_sync: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.wal_bytes_per_sync: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.strict_bytes_per_sync: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_readahead_size: 2097152
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_background_flushes: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Compression algorithms supported:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kZSTD supported: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kXpressCompression supported: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kBZip2Compression supported: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kLZ4Compression supported: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kZlibCompression supported: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kLZ4HCCompression supported: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: #011kSnappyCompression supported: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5
03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb:
Options.compaction_filter_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05
localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: 
kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 
03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 
data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 
localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost 
ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8f80)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8320)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 
localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.merge_operator: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_filter_factory: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.sst_partitioner_factory: None Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55656c0a8320)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55656af1b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.write_buffer_size: 16777216 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number: 64 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression: LZ4 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression: Disabled Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.num_levels: 7 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost 
ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.level: 32767 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.enabled: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.arena_block_size: 1048576 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_support: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.inplace_update_num_locks: 10000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.bloom_locality: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.max_successive_merges: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.force_consistency_checks: 1 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.ttl: 2592000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 03:45:05 localhost 
ceph-osd[32468]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_files: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.min_blob_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_size: 268435456 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.blob_file_starting_level: 0 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 
0), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0bffd29d-fb39-49ca-9d12-704daa196949 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305172025, "job": 1, "event": "recovery_started", "wal_files": [31]} Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305183805, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", 
"smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bffd29d-fb39-49ca-9d12-704daa196949", "db_session_id": "NUVQ05973K0TJ5ECPLCL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305187813, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, 
"format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759650305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bffd29d-fb39-49ca-9d12-704daa196949", "db_session_id": "NUVQ05973K0TJ5ECPLCL", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305197753, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", 
"creation_time": 1759650305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0bffd29d-fb39-49ca-9d12-704daa196949", "db_session_id": "NUVQ05973K0TJ5ECPLCL", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759650305202931, "job": 1, "event": "recovery_finished"} Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55656af86700 Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: DB pointer 0x55656be19a00 Oct 5 03:45:05 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Oct 5 03:45:05 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super from 4, latest 4 Oct 5 03:45:05 localhost ceph-osd[32468]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super done Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per 
commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 
0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 
0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012 Oct 5 03:45:05 localhost ceph-osd[32468]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Oct 5 03:45:05 localhost ceph-osd[32468]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Oct 5 03:45:05 localhost ceph-osd[32468]: _get_class not permitted to load lua Oct 5 03:45:05 localhost ceph-osd[32468]: _get_class not permitted to load sdk Oct 5 03:45:05 localhost ceph-osd[32468]: _get_class not permitted to load test_remote_reads Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for clients Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for osds Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 
check_osdmap_features enabling on-disk ERASURE CODES compat feature Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 load_pgs Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 load_pgs opened 0 pgs Oct 5 03:45:05 localhost ceph-osd[32468]: osd.3 0 log_to_monitors true Oct 5 03:45:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3[32464]: 2025-10-05T07:45:05.268+0000 7fc6d36c6a80 -1 osd.3 0 log_to_monitors true Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.787 iops: 6857.566 elapsed_sec: 0.437 Oct 5 03:45:05 localhost ceph-osd[31524]: log_channel(cluster) log [WRN] : OSD bench result of 6857.565565 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 0 waiting for initial osdmap Oct 5 03:45:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0[31520]: 2025-10-05T07:45:05.324+0000 7f9e7c2b4640 -1 osd.0 0 waiting for initial osdmap Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 crush map has features 288514050185494528, adjusting msgr requires for clients Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 crush map has features 3314932999778484224, adjusting msgr requires for osds Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 check_osdmap_features require_osd_release unknown -> reef Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 set_numa_affinity not setting numa affinity 
Oct 5 03:45:05 localhost ceph-osd[31524]: osd.0 11 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial Oct 5 03:45:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-0[31520]: 2025-10-05T07:45:05.340+0000 7f9e778de640 -1 osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Oct 5 03:45:05 localhost systemd[1]: var-lib-containers-storage-overlay-80bb9e38402b316552a40200d171da5370b438ee77d44534739e19bae1e81c2e-merged.mount: Deactivated successfully. Oct 5 03:45:05 localhost boring_dhawan[32815]: { Oct 5 03:45:05 localhost boring_dhawan[32815]: "1b959220-4400-4994-90f2-14032cbb3197": { Oct 5 03:45:05 localhost boring_dhawan[32815]: "ceph_fsid": "659062ac-50b4-5607-b699-3105da7f55ee", Oct 5 03:45:05 localhost boring_dhawan[32815]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Oct 5 03:45:05 localhost boring_dhawan[32815]: "osd_id": 0, Oct 5 03:45:05 localhost boring_dhawan[32815]: "osd_uuid": "1b959220-4400-4994-90f2-14032cbb3197", Oct 5 03:45:05 localhost boring_dhawan[32815]: "type": "bluestore" Oct 5 03:45:05 localhost boring_dhawan[32815]: }, Oct 5 03:45:05 localhost boring_dhawan[32815]: "86a0a5d0-5e12-45dd-860e-409d6f08bd43": { Oct 5 03:45:05 localhost boring_dhawan[32815]: "ceph_fsid": "659062ac-50b4-5607-b699-3105da7f55ee", Oct 5 03:45:05 localhost boring_dhawan[32815]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Oct 5 03:45:05 localhost boring_dhawan[32815]: "osd_id": 3, Oct 5 03:45:05 localhost boring_dhawan[32815]: "osd_uuid": "86a0a5d0-5e12-45dd-860e-409d6f08bd43", Oct 5 03:45:05 localhost boring_dhawan[32815]: "type": "bluestore" Oct 5 03:45:05 localhost boring_dhawan[32815]: } Oct 5 03:45:05 localhost boring_dhawan[32815]: } Oct 5 03:45:05 localhost systemd[1]: libpod-6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6.scope: Deactivated successfully. 
Oct 5 03:45:05 localhost podman[32799]: 2025-10-05 07:45:05.753288244 +0000 UTC m=+0.833343838 container died 6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_dhawan, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux ) Oct 5 03:45:05 localhost systemd[1]: tmp-crun.zyfeWS.mount: Deactivated successfully. Oct 5 03:45:05 localhost systemd[1]: var-lib-containers-storage-overlay-4b43acc358b7c76134782e5c9477eeb03b1de01c80a28d7e70f4986a6a66d8df-merged.mount: Deactivated successfully. 
Oct 5 03:45:05 localhost podman[33071]: 2025-10-05 07:45:05.84380602 +0000 UTC m=+0.079343477 container remove 6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_dhawan, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.expose-services=) Oct 5 03:45:05 localhost systemd[1]: libpod-conmon-6e4eb35facc18b9ef827842b12dccf85e923b56f41fff1caccd6b0c3f68502d6.scope: Deactivated successfully. 
Oct 5 03:45:06 localhost ceph-osd[31524]: osd.0 12 state: booting -> active Oct 5 03:45:06 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Oct 5 03:45:06 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 done with init, starting boot process Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 start_boot Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 maybe_override_options_for_qos osd_max_backfills set to 1 Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Oct 5 03:45:07 localhost ceph-osd[32468]: osd.3 0 bench count 12288000 bsize 4 KiB Oct 5 03:45:07 localhost systemd[1]: tmp-crun.N5HO22.mount: Deactivated successfully. 
Oct 5 03:45:07 localhost podman[33201]: 2025-10-05 07:45:07.538913138 +0000 UTC m=+0.093069906 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, release=553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7) Oct 5 03:45:07 localhost podman[33201]: 2025-10-05 07:45:07.6438938 +0000 UTC m=+0.198050598 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True) Oct 5 03:45:08 localhost ceph-osd[31524]: osd.0 14 crush map has features 288514051259236352, adjusting msgr requires for clients Oct 5 03:45:08 localhost ceph-osd[31524]: osd.0 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Oct 5 03:45:08 localhost ceph-osd[31524]: osd.0 14 crush map has features 3314933000852226048, adjusting msgr requires for osds Oct 5 03:45:09 localhost podman[33395]: Oct 5 03:45:09 localhost podman[33395]: 2025-10-05 07:45:09.726417038 +0000 UTC m=+0.077962820 container create 3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_joliot, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, release=553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Oct 5 03:45:09 localhost systemd[1]: Started libpod-conmon-3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90.scope. Oct 5 03:45:09 localhost systemd[1]: Started libcrun container. Oct 5 03:45:09 localhost podman[33395]: 2025-10-05 07:45:09.696289891 +0000 UTC m=+0.047835693 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:09 localhost podman[33395]: 2025-10-05 07:45:09.798724435 +0000 UTC m=+0.150270227 container init 3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_joliot, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Oct 5 03:45:09 localhost systemd[1]: tmp-crun.2IU5MC.mount: Deactivated successfully. Oct 5 03:45:09 localhost podman[33395]: 2025-10-05 07:45:09.809440422 +0000 UTC m=+0.160986174 container start 3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_joliot, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , version=7, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, distribution-scope=public) Oct 5 03:45:09 localhost podman[33395]: 2025-10-05 07:45:09.810932933 +0000 UTC m=+0.162478775 container attach 3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_joliot, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, 
io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., release=553, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 03:45:09 localhost nervous_joliot[33410]: 167 167 Oct 5 03:45:09 localhost systemd[1]: libpod-3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90.scope: Deactivated successfully. Oct 5 03:45:09 localhost podman[33395]: 2025-10-05 07:45:09.814776365 +0000 UTC m=+0.166322187 container died 3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_joliot, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , version=7, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, 
com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 03:45:09 localhost podman[33415]: 2025-10-05 07:45:09.91421879 +0000 UTC m=+0.087725452 container remove 3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_joliot, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 03:45:09 localhost systemd[1]: libpod-conmon-3a2503815db784b6bdbfdd235ad784fbf006fc473dfd5f521088fc5b812d8c90.scope: Deactivated successfully. 
Oct 5 03:45:09 localhost ceph-osd[32468]: osd.3 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.477 iops: 7802.168 elapsed_sec: 0.385 Oct 5 03:45:09 localhost ceph-osd[32468]: log_channel(cluster) log [WRN] : OSD bench result of 7802.168182 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Oct 5 03:45:09 localhost ceph-osd[32468]: osd.3 0 waiting for initial osdmap Oct 5 03:45:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3[32464]: 2025-10-05T07:45:09.991+0000 7fc6cf645640 -1 osd.3 0 waiting for initial osdmap Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 crush map has features 288514051259236352, adjusting msgr requires for clients Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 crush map has features 3314933000852226048, adjusting msgr requires for osds Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 check_osdmap_features require_osd_release unknown -> reef Oct 5 03:45:10 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-osd-3[32464]: 2025-10-05T07:45:10.012+0000 7fc6cac6f640 -1 osd.3 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 set_numa_affinity not setting numa affinity Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 15 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial Oct 5 03:45:10 localhost podman[33438]: Oct 5 
03:45:10 localhost podman[33438]: 2025-10-05 07:45:10.106997835 +0000 UTC m=+0.065815084 container create 96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_volhard, version=7, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 03:45:10 localhost systemd[1]: Started libpod-conmon-96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f.scope. Oct 5 03:45:10 localhost systemd[1]: Started libcrun container. 
Oct 5 03:45:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a205fd395b2af1010607559439e4656de9d7ad5eea2c1ef9968dd30f428a75e/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:10 localhost podman[33438]: 2025-10-05 07:45:10.080461574 +0000 UTC m=+0.039278853 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 03:45:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a205fd395b2af1010607559439e4656de9d7ad5eea2c1ef9968dd30f428a75e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a205fd395b2af1010607559439e4656de9d7ad5eea2c1ef9968dd30f428a75e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 03:45:10 localhost podman[33438]: 2025-10-05 07:45:10.208630969 +0000 UTC m=+0.167448228 container init 96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_volhard, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, 
RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, architecture=x86_64) Oct 5 03:45:10 localhost podman[33438]: 2025-10-05 07:45:10.218928944 +0000 UTC m=+0.177746203 container start 96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_volhard, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 03:45:10 localhost podman[33438]: 2025-10-05 07:45:10.219196981 +0000 UTC m=+0.178014230 container attach 96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_volhard, version=7, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., release=553, RELEASE=main, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 03:45:10 localhost ceph-osd[32468]: osd.3 16 state: booting -> active Oct 5 03:45:10 localhost ceph-osd[31524]: osd.0 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=15) [2,0,4] r=1 lpr=15 pi=[14,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 03:45:10 localhost systemd[1]: var-lib-containers-storage-overlay-30dc7fcf6df221a4fca0e3c9c11e8121124047e150b66323eb996327bb1ddb77-merged.mount: Deactivated successfully. 
Oct 5 03:45:11 localhost jolly_volhard[33453]: [ Oct 5 03:45:11 localhost jolly_volhard[33453]: { Oct 5 03:45:11 localhost jolly_volhard[33453]: "available": false, Oct 5 03:45:11 localhost jolly_volhard[33453]: "ceph_device": false, Oct 5 03:45:11 localhost jolly_volhard[33453]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 03:45:11 localhost jolly_volhard[33453]: "lsm_data": {}, Oct 5 03:45:11 localhost jolly_volhard[33453]: "lvs": [], Oct 5 03:45:11 localhost jolly_volhard[33453]: "path": "/dev/sr0", Oct 5 03:45:11 localhost jolly_volhard[33453]: "rejected_reasons": [ Oct 5 03:45:11 localhost jolly_volhard[33453]: "Insufficient space (<5GB)", Oct 5 03:45:11 localhost jolly_volhard[33453]: "Has a FileSystem" Oct 5 03:45:11 localhost jolly_volhard[33453]: ], Oct 5 03:45:11 localhost jolly_volhard[33453]: "sys_api": { Oct 5 03:45:11 localhost jolly_volhard[33453]: "actuators": null, Oct 5 03:45:11 localhost jolly_volhard[33453]: "device_nodes": "sr0", Oct 5 03:45:11 localhost jolly_volhard[33453]: "human_readable_size": "482.00 KB", Oct 5 03:45:11 localhost jolly_volhard[33453]: "id_bus": "ata", Oct 5 03:45:11 localhost jolly_volhard[33453]: "model": "QEMU DVD-ROM", Oct 5 03:45:11 localhost jolly_volhard[33453]: "nr_requests": "2", Oct 5 03:45:11 localhost jolly_volhard[33453]: "partitions": {}, Oct 5 03:45:11 localhost jolly_volhard[33453]: "path": "/dev/sr0", Oct 5 03:45:11 localhost jolly_volhard[33453]: "removable": "1", Oct 5 03:45:11 localhost jolly_volhard[33453]: "rev": "2.5+", Oct 5 03:45:11 localhost jolly_volhard[33453]: "ro": "0", Oct 5 03:45:11 localhost jolly_volhard[33453]: "rotational": "1", Oct 5 03:45:11 localhost jolly_volhard[33453]: "sas_address": "", Oct 5 03:45:11 localhost jolly_volhard[33453]: "sas_device_handle": "", Oct 5 03:45:11 localhost jolly_volhard[33453]: "scheduler_mode": "mq-deadline", Oct 5 03:45:11 localhost jolly_volhard[33453]: "sectors": 0, Oct 5 03:45:11 localhost jolly_volhard[33453]: "sectorsize": "2048", Oct 5 03:45:11 
localhost jolly_volhard[33453]: "size": 493568.0, Oct 5 03:45:11 localhost jolly_volhard[33453]: "support_discard": "0", Oct 5 03:45:11 localhost jolly_volhard[33453]: "type": "disk", Oct 5 03:45:11 localhost jolly_volhard[33453]: "vendor": "QEMU" Oct 5 03:45:11 localhost jolly_volhard[33453]: } Oct 5 03:45:11 localhost jolly_volhard[33453]: } Oct 5 03:45:11 localhost jolly_volhard[33453]: ] Oct 5 03:45:11 localhost systemd[1]: libpod-96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f.scope: Deactivated successfully. Oct 5 03:45:11 localhost podman[33438]: 2025-10-05 07:45:11.044521805 +0000 UTC m=+1.003339034 container died 96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_volhard, io.openshift.tags=rhceph ceph, ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True) Oct 5 03:45:11 localhost systemd[1]: var-lib-containers-storage-overlay-4a205fd395b2af1010607559439e4656de9d7ad5eea2c1ef9968dd30f428a75e-merged.mount: Deactivated successfully. 
Oct 5 03:45:11 localhost podman[34830]: 2025-10-05 07:45:11.137238399 +0000 UTC m=+0.082032639 container remove 96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_volhard, architecture=x86_64, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git) Oct 5 03:45:11 localhost systemd[1]: libpod-conmon-96080f2570e77a35a81bb18cef9e455f2d8a0a04cd3e08a0446e2ec0cba68b0f.scope: Deactivated successfully. Oct 5 03:45:20 localhost systemd[1]: tmp-crun.2nmaLQ.mount: Deactivated successfully. Oct 5 03:45:20 localhost systemd[25997]: Starting Mark boot as successful... 
Oct 5 03:45:20 localhost podman[34960]: 2025-10-05 07:45:20.81847063 +0000 UTC m=+0.084351411 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, release=553, ceph=True) Oct 5 03:45:20 localhost systemd[25997]: Finished Mark boot as successful. 
Oct 5 03:45:20 localhost podman[34960]: 2025-10-05 07:45:20.922751644 +0000 UTC m=+0.188632395 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, release=553, io.buildah.version=1.33.12, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Oct 5 03:46:22 localhost podman[35136]: 2025-10-05 07:46:22.563057002 +0000 UTC m=+0.080135143 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-type=git, distribution-scope=public, RELEASE=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, 
GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=553, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True) Oct 5 03:46:22 localhost podman[35136]: 2025-10-05 07:46:22.675017512 +0000 UTC m=+0.192095673 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=553, name=rhceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) 
Oct 5 03:46:27 localhost systemd[1]: session-13.scope: Deactivated successfully. Oct 5 03:46:27 localhost systemd[1]: session-13.scope: Consumed 20.856s CPU time. Oct 5 03:46:27 localhost systemd-logind[760]: Session 13 logged out. Waiting for processes to exit. Oct 5 03:46:27 localhost systemd-logind[760]: Removed session 13. Oct 5 03:48:41 localhost systemd[25997]: Created slice User Background Tasks Slice. Oct 5 03:48:41 localhost systemd[25997]: Starting Cleanup of User's Temporary Files and Directories... Oct 5 03:48:41 localhost systemd[25997]: Finished Cleanup of User's Temporary Files and Directories. Oct 5 03:49:46 localhost sshd[35506]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:49:46 localhost systemd-logind[760]: New session 27 of user zuul. Oct 5 03:49:46 localhost systemd[1]: Started Session 27 of User zuul. Oct 5 03:49:46 localhost python3[35554]: ansible-ansible.legacy.ping Invoked with data=pong Oct 5 03:49:47 localhost python3[35599]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 03:49:48 localhost python3[35619]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471152.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Oct 5 03:49:48 localhost python3[35675]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Oct 5 03:49:49 localhost python3[35718]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759650588.555187-66997-142056929312749/source _original_basename=tmpnk6_e1i4 follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:49 localhost python3[35748]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:50 localhost python3[35764]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:50 localhost python3[35780]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:51 localhost python3[35796]: ansible-lineinfile Invoked with 
path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCokTnmuGGd7FqRt5lj7gy5ajM+x5MUcAES6KHeKcIlL/nEoTFWT2pxSuY+fKFL+y2KYf+6oN93PEqRhUrqK2OOYUXtho0LDFtu5p6gjNED7yqT3QdloUz24ZocJwkvACOLzZUVodN8WbszwjHIXDgEmGzISTzBUv3K1tepuhLyXXYo5ZhGR4g6xCjmEdTXHh9xPBWaJsq9zbCKdCa2R9nrUg4XgJaeauPFw9xvXeVAt24suKGOqgvMt5SLNOLC+dpMArRnnHnnf2oX75R2U27XujmhLVCj1FHPm5c9KtI5iD64zALdWHikrsXHqmuOlvS0Z1+qD1nSYQCKhVL+CILWhe4Ln2wf+5jXsQi29MNjYHQYCpA3fJDgLPl21lh1O0NyNuWRIos30+GxjDjgv+5j7ZnLd3n5ddE4Z75kUN2CtT+V4BAf6dJCtSQTzfSP2deyneYganl9EXtfuPVVZI5Ot8j4UQ9dJYXfzmCmvtsNhzNcF7fHuPsD2k55iE8qO3c= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:49:51 localhost python3[35810]: ansible-ping Invoked with data=pong Oct 5 03:50:02 localhost sshd[35811]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:50:02 localhost systemd-logind[760]: New session 28 of user tripleo-admin. Oct 5 03:50:02 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 5 03:50:02 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Oct 5 03:50:02 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 5 03:50:02 localhost systemd[1]: Starting User Manager for UID 1003... Oct 5 03:50:02 localhost systemd[35815]: Queued start job for default target Main User Target. Oct 5 03:50:02 localhost systemd[35815]: Created slice User Application Slice. Oct 5 03:50:02 localhost systemd[35815]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 03:50:02 localhost systemd[35815]: Started Daily Cleanup of User's Temporary Directories. Oct 5 03:50:02 localhost systemd[35815]: Reached target Paths. Oct 5 03:50:02 localhost systemd[35815]: Reached target Timers. 
Oct 5 03:50:02 localhost systemd[35815]: Starting D-Bus User Message Bus Socket... Oct 5 03:50:02 localhost systemd[35815]: Starting Create User's Volatile Files and Directories... Oct 5 03:50:02 localhost systemd[35815]: Listening on D-Bus User Message Bus Socket. Oct 5 03:50:02 localhost systemd[35815]: Reached target Sockets. Oct 5 03:50:02 localhost systemd[35815]: Finished Create User's Volatile Files and Directories. Oct 5 03:50:02 localhost systemd[35815]: Reached target Basic System. Oct 5 03:50:02 localhost systemd[35815]: Reached target Main User Target. Oct 5 03:50:02 localhost systemd[35815]: Startup finished in 119ms. Oct 5 03:50:02 localhost systemd[1]: Started User Manager for UID 1003. Oct 5 03:50:02 localhost systemd[1]: Started Session 28 of User tripleo-admin. Oct 5 03:50:03 localhost python3[35875]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Oct 5 03:50:08 localhost python3[35895]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config Oct 5 03:50:08 localhost python3[35911]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Oct 5 03:50:09 localhost python3[35959]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.xhkukgi8tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:09 localhost python3[35989]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.xhkukgi8tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:10 localhost python3[36005]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.xhkukgi8tmphosts insertbefore=BOF block=172.17.0.106 np0005471150.localdomain np0005471150#012172.18.0.106 np0005471150.storage.localdomain np0005471150.storage#012172.20.0.106 np0005471150.storagemgmt.localdomain np0005471150.storagemgmt#012172.17.0.106 np0005471150.internalapi.localdomain np0005471150.internalapi#012172.19.0.106 np0005471150.tenant.localdomain np0005471150.tenant#012192.168.122.106 np0005471150.ctlplane.localdomain np0005471150.ctlplane#012172.17.0.107 np0005471151.localdomain np0005471151#012172.18.0.107 np0005471151.storage.localdomain np0005471151.storage#012172.20.0.107 np0005471151.storagemgmt.localdomain np0005471151.storagemgmt#012172.17.0.107 np0005471151.internalapi.localdomain np0005471151.internalapi#012172.19.0.107 np0005471151.tenant.localdomain np0005471151.tenant#012192.168.122.107 np0005471151.ctlplane.localdomain np0005471151.ctlplane#012172.17.0.108 np0005471152.localdomain np0005471152#012172.18.0.108 np0005471152.storage.localdomain np0005471152.storage#012172.20.0.108 np0005471152.storagemgmt.localdomain np0005471152.storagemgmt#012172.17.0.108 np0005471152.internalapi.localdomain np0005471152.internalapi#012172.19.0.108 np0005471152.tenant.localdomain np0005471152.tenant#012192.168.122.108 np0005471152.ctlplane.localdomain np0005471152.ctlplane#012172.17.0.103 np0005471146.localdomain np0005471146#012172.18.0.103 np0005471146.storage.localdomain np0005471146.storage#012172.20.0.103 np0005471146.storagemgmt.localdomain np0005471146.storagemgmt#012172.17.0.103 np0005471146.internalapi.localdomain np0005471146.internalapi#012172.19.0.103 np0005471146.tenant.localdomain np0005471146.tenant#012192.168.122.103 
np0005471146.ctlplane.localdomain np0005471146.ctlplane#012172.17.0.104 np0005471147.localdomain np0005471147#012172.18.0.104 np0005471147.storage.localdomain np0005471147.storage#012172.20.0.104 np0005471147.storagemgmt.localdomain np0005471147.storagemgmt#012172.17.0.104 np0005471147.internalapi.localdomain np0005471147.internalapi#012172.19.0.104 np0005471147.tenant.localdomain np0005471147.tenant#012192.168.122.104 np0005471147.ctlplane.localdomain np0005471147.ctlplane#012172.17.0.105 np0005471148.localdomain np0005471148#012172.18.0.105 np0005471148.storage.localdomain np0005471148.storage#012172.20.0.105 np0005471148.storagemgmt.localdomain np0005471148.storagemgmt#012172.17.0.105 np0005471148.internalapi.localdomain np0005471148.internalapi#012172.19.0.105 np0005471148.tenant.localdomain np0005471148.tenant#012192.168.122.105 np0005471148.ctlplane.localdomain np0005471148.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.178 overcloud.storage.localdomain#012172.20.0.167 overcloud.storagemgmt.localdomain#012172.17.0.227 overcloud.internalapi.localdomain#012172.21.0.204 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:11 localhost python3[36021]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.xhkukgi8tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:50:11 localhost python3[36038]: ansible-file Invoked with path=/tmp/ansible.xhkukgi8tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:50:12 localhost python3[36054]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:50:14 localhost python3[36071]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:50:18 localhost python3[36090]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:50:19 localhost python3[36107]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:51:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=5 res=1 Oct 5 03:51:30 localhost kernel: SELinux: Converting 2699 SID table entries... Oct 5 03:51:30 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:51:30 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:51:30 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:51:30 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=6 res=1 Oct 5 03:51:30 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:51:30 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 03:51:30 localhost systemd[1]: Reloading. Oct 5 03:51:31 localhost systemd-rc-local-generator[37829]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:51:31 localhost systemd-sysv-generator[37835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:51:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 03:51:31 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:51:31 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 03:51:31 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 03:51:31 localhost systemd[1]: run-rfeea02d5e8d049f0907206d8dbea5bdc.service: Deactivated successfully. Oct 5 03:51:32 localhost python3[38259]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:34 localhost python3[38398]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:51:34 localhost systemd[1]: Reloading. Oct 5 03:51:34 localhost systemd-rc-local-generator[38428]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:51:34 localhost systemd-sysv-generator[38431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:51:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 03:51:35 localhost python3[38452]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:35 localhost python3[38468]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:36 localhost python3[38485]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 03:51:36 localhost python3[38503]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:37 localhost python3[38521]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:37 localhost python3[38539]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:51:37 localhost systemd[1]: Reloading Network Manager... 
Oct 5 03:51:37 localhost NetworkManager[5970]: [1759650697.9201] audit: op="reload" arg="0" pid=38542 uid=0 result="success" Oct 5 03:51:37 localhost NetworkManager[5970]: [1759650697.9210] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf)) Oct 5 03:51:37 localhost NetworkManager[5970]: [1759650697.9210] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged Oct 5 03:51:37 localhost systemd[1]: Reloaded Network Manager. Oct 5 03:51:39 localhost python3[38558]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:39 localhost python3[38575]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:40 localhost python3[38593]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:40 localhost python3[38609]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:41 localhost python3[38625]: ansible-tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Oct 5 03:51:41 localhost python3[38641]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:51:42 localhost python3[38657]: ansible-blockinfile Invoked with path=/tmp/ansible.hb2aea3s block=[192.168.122.106]*,[np0005471150.ctlplane.localdomain]*,[172.17.0.106]*,[np0005471150.internalapi.localdomain]*,[172.18.0.106]*,[np0005471150.storage.localdomain]*,[172.20.0.106]*,[np0005471150.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005471150.tenant.localdomain]*,[np0005471150.localdomain]*,[np0005471150]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCT5ftkzxR2Qyrkv4Bog+udHavLt9s9Di0AWsGW2RuyQQiM22RbERlEwcEpl46d2UZEA/h4vz9TbE4fxIRY43XsuoO7kScaRsaDEk80scoEanpXJXpL99y+HtDr7IiFnp920RFZWAvClhPuG5f4GTZcAH8JwlQdHLoU08owfBRpfZmDNZcoyX0tprcWQCD7KMlzpxwZFqhjkJVPrnq3lxWA9cG87b9CDA6sHuH8h4RYjBBtCOkxgTVQgBjGVWWjO64RQXgkKPObBX3sBjTYorcuu5af6cl8pwRuWCIDiskwHVqEvsdx7nXa+8le2b250IQoHti8LislYbkhX/LUO0TmKGbvUuzaK3gsuRGLxf+qG4UdCa7CYecLosB0sg0pv7c95e80sFtLwEFyKvUkMfEdbFIxMr03gd1i6lSeafCtY9Xk0sjkbJpMGaj2hsNlv1S6X8taFEHFuQyDEZ3ZkQXwxYkb0pqUef9Fn6d2VvlP4u7GHH+iQZtgv7NZrxvZOos=#012[192.168.122.107]*,[np0005471151.ctlplane.localdomain]*,[172.17.0.107]*,[np0005471151.internalapi.localdomain]*,[172.18.0.107]*,[np0005471151.storage.localdomain]*,[172.20.0.107]*,[np0005471151.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005471151.tenant.localdomain]*,[np0005471151.localdomain]*,[np0005471151]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDeDNxXs+ZUIP9/a2zVFllXGXsP2/RtUXLMLDP4YL71gvVrRf+MpnYrvCNPSMtaio8hFnrpiDFXxbT/vT8cGaq0VtYxjMm6ggMMEpJTsx2xG5zkDW3nbKnfBWdlrf2h3+WUBHOB9mofrB5CT0cuNDshy8Zq3cPyqMZVPdJXPIH+fsWD+b65aHwAk93ThJehxt/nPEDADcRKHLYFTlAyvnZ5aEvqj714SQIjwLcSkgaTfu3JmjF9FllzZz3DKBld7fRbggrz2rkww5yxrvj9W/KsoSugYq1N+fEEWdUonP/PYnRfJ9Qe+OMV5TmEEYuUOqPqaVs8vMZI4zYb3l5asdknHsN0N3URQbZANs9Fettfh3uoOPlyegvPjIMukQ8KZAy+KQWSAzho7RnR5ULuWVNi7Rj9mFC01wy0778Zqb7BlWc+Yn3kNXEkR9u1vQjBq7B+Ie922b6pYARzXmaE2yjzI7QdYo1IB/o9UIP/zEfugki+28qB0215MGXrk3EqTk8=#012[192.168.122.108]*,[np0005471152.ctlplane.localdomain]*,[172.17.0.108]*,[np0005471152.internalapi.localdomain]*,[172.18.0.108]*,[np0005471152.storage.localdomain]*,[172.20.0.108]*,[np0005471152.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005471152.tenant.localdomain]*,[np0005471152.localdomain]*,[np0005471152]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQL9bjzo5YAISp2Bxwtb4g1hALXPqelm3WBGwGfh3/tRyDvnxqpgAH4BkgnyM92vRVDUZgylBjfJ54aevQzR0sxDWI5un2tTEepezxrrMvJNDvOss/fCLi88oah/o3qw++j3XWh7zZNBR2ZlXoM/pIxbee1SynEGOX2B0csXrd1qrshg6L4eHx3xP0RwAulzm5seEcMLqx8KH2dq77wY0VqQkpaFyFb7FqX77rxq/UKPpgE0srhO8SRvE9De5pNe/qOciIyF6dgzu5EyyHu7KYjTILbMKxDa32WE/P2Rf7vIscc9uCS7JGMjSz6NeeFnpRpsv8N/pMUGyuUGsD1ZchAk2FVF+E5cZtF04URyBXHR3aMjxItV46eMTahkYu0ieB5XIe1ht+1mpTNW5HuK+c5IGVa1+5Y3udf7NKVNLxbJKJpiyb1+mVhhrwPzJFaIuMT3y2IHiF3xGDIof8BMBzvhUW/T0WYISPRdb3hpP5yODYfEz7Mmnpe6mZj+mFVVc=#012[192.168.122.103]*,[np0005471146.ctlplane.localdomain]*,[172.17.0.103]*,[np0005471146.internalapi.localdomain]*,[172.18.0.103]*,[np0005471146.storage.localdomain]*,[172.20.0.103]*,[np0005471146.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005471146.tenant.localdomain]*,[np0005471146.localdomain]*,[np0005471146]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDB7OvQtGFS2ddbuT67PLzOZMMKExXKgLGlJbGmtwnZie42R//csfGTuDcY5sTL5gAKr5LgWtvuSJPxC5H8l1UXw+Jr1ot425wmg47AIcheuJNQqzQ7tPAGH3PICnVC6aPHAOVRVF+gH7UOtvdgmSE7iMATMRPcUy2tqR8NCuKKvzDeS/2RQXJpgWok3C9RwXiVS5oUv9jUyevFtgntUOYojmdQgQKC7AwBkYfT7TF3CJZYryU/VVFtwd7a/UiSCw5QLoTN8NxCyROZfFtmylvUybp8RdUroQiriJw1zcQyVLsXbwq0clpb5hc+/3tQLZv3a6JrVpp5DZq+MW98UkErXy11sX4Mk9e2seewM0xMkdGzMReNlZqtUWLIISbhxkBby9gn3WRKG32HdCCSD66ZhNAfOCfpaO3dNiCRUyzYoh4WRF7pu7nwBQ/eTQp8SGptdGGHUf0XF9tqRWjj2nrVrHHOnbj/9clk9VdTU6dbcxFoz3X5SWbovR40rDPz6e0=#012[192.168.122.104]*,[np0005471147.ctlplane.localdomain]*,[172.17.0.104]*,[np0005471147.internalapi.localdomain]*,[172.18.0.104]*,[np0005471147.storage.localdomain]*,[172.20.0.104]*,[np0005471147.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005471147.tenant.localdomain]*,[np0005471147.localdomain]*,[np0005471147]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz7dSoZhAVsu7Q6pQ5T0a3vdxjM8VsWq083YCwmW5ZBuWxtpO+ywiBUZXF2GXQh83uhFPjTL6AVFeIX5lNLPi70M1qL6Twe/O2mk2gSzlx225JQnN98IGNIaiWFoDWJeh+QC5ahKjsZLMqt7JQaJMEu8Y+pNNhDzn+mrA5SQL/4KeoVuUMVnHW606U26xi/2P8WkxBdjPuLtDQdFdmprrS1/lNbxCAMj0MhrqsxbpX9uLe04KqrNXmsaTlvu+XKlf2y7mxaihY81Qbyf86Guw2DS8EIhDZjC2olPxoqJJn5ZAGtvtc/FzkH/pbbMy1CbD6OnTFGsUHbZKS9eBF7PtpLp3YiUp/FyRfiyxmtelUycYx7bqdixnmEGj4O2Ju2ehdpxO1RyBRyrfUelVA8bfBft6yd41RwKwujj5OtnOXzqb7I8O83ZgbDm6oUjTG+59hElsoR3PI5ow3C3NTrDQxwesLfuTjCrjHCWnvKIQb51xqtNRDT8PTStx27/FxOJ0=#012[192.168.122.105]*,[np0005471148.ctlplane.localdomain]*,[172.17.0.105]*,[np0005471148.internalapi.localdomain]*,[172.18.0.105]*,[np0005471148.storage.localdomain]*,[172.20.0.105]*,[np0005471148.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005471148.tenant.localdomain]*,[np0005471148.localdomain]*,[np0005471148]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCav0eZ81SP1lgxNKp8kzS2MGddVZXD3CnfZarlQErB75DRL4T/NvcVXnfxKn4UPX+h1zwIlKhrD0kHzKTVqifYPUqAmLb8rYREMTmXhQxto2b7VGPMQJtDAprHqyUEFlSdV8NbN3SVctntX/mSKO9bD06JFfa3F62ItPVHy6SnAKMzgNdSszOdKFvbEzC2oxcehr1uB2BAOIiTb1KxyTjXhvXZSYUsBxiGWPOP83oZQxCJlh/VjIUu6P2F6+mv1415n4ujbEujO8/iVbBF1uy28bTobQfABbfPNDNUCd9Gr+xDlT4JuuYTcjqG+gr3yvctzwj/+lxYcJbC0ZYtRhJ0pu8gjm44UFVFCpPxwPpvkKV5n+jU3uaSX98EZpaTlK51qqfwX29LxmMKs3pezfixQ67KCoq1jcDNXUiZpX9svKFD2Drlx+6s9pBkQGZcsmVNiCKQBJmrpFCgYhAPOEIjAGPkic0qp+pAaJtQpB/gYfF/cNCJmCm80s5s/jRuSOs=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:42 localhost python3[38673]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hb2aea3s' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:51:43 localhost python3[38691]: ansible-file Invoked with path=/tmp/ansible.hb2aea3s state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:51:44 localhost python3[38707]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 03:51:44 localhost 
python3[38723]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:51:44 localhost python3[38741]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:51:45 localhost python3[38760]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Oct 5 03:51:47 localhost python3[38897]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:51:48 localhost python3[38914]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 5 03:51:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 5 03:51:51 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 5 03:51:51 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:51:51 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:51:51 localhost systemd[1]: Reloading.
Oct 5 03:51:51 localhost systemd-rc-local-generator[38986]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 03:51:51 localhost systemd-sysv-generator[38990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 03:51:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 03:51:51 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 03:51:51 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 5 03:51:51 localhost systemd[1]: tuned.service: Deactivated successfully.
Oct 5 03:51:51 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 5 03:51:51 localhost systemd[1]: tuned.service: Consumed 1.584s CPU time.
Oct 5 03:51:51 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 5 03:51:51 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:51:51 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:51:51 localhost systemd[1]: run-rf8599b1b962141e6a53b77738c26702a.service: Deactivated successfully.
Oct 5 03:51:53 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Oct 5 03:51:53 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 03:51:53 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 03:51:53 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 03:51:53 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 03:51:53 localhost systemd[1]: run-rc1c1dbd63f744b93a7c0d1087d9f9515.service: Deactivated successfully.
Oct 5 03:51:54 localhost python3[39351]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 03:51:54 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 5 03:51:54 localhost systemd[1]: tuned.service: Deactivated successfully.
Oct 5 03:51:54 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 5 03:51:54 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 5 03:51:55 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Oct 5 03:51:56 localhost python3[39546]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:51:57 localhost python3[39563]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Oct 5 03:51:57 localhost python3[39579]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:51:58 localhost python3[39595]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:00 localhost python3[39615]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:00 localhost python3[39632]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:52:03 localhost python3[39648]: ansible-replace Invoked with
regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:08 localhost python3[39664]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:08 localhost python3[39712]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:09 localhost python3[39757]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650728.6370091-71409-209022859280822/source _original_basename=tmpff3sqvec follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:09 localhost python3[39787]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:10 localhost python3[39835]: ansible-ansible.legacy.stat Invoked with
path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:10 localhost python3[39878]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650730.266981-71635-204298937296698/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=43b29c8557766d8327a1fa06529a284fbedbdaa9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:11 localhost python3[39940]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:11 localhost python3[39983]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650731.2074308-71697-119720838910762/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=48c763e87e973e17d11bd4dcd68a412176c73bf2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:12 localhost python3[40045]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:12 localhost python3[40088]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650732.104602-71697-37482039495504/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2
checksum=97e470c59032f2514ad5196642ab40dc0e60ec7a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:13 localhost python3[40150]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:13 localhost python3[40193]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650733.0081031-71697-229832119635182/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:14 localhost python3[40255]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:14 localhost python3[40298]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650733.9730947-71697-193096629056225/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:15 localhost python3[40360]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:15 localhost python3[40403]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650734.8102467-71697-237170913994089/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=65aaa001cbf82660628cbb5773f7503704b1542e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:16 localhost python3[40465]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:16 localhost python3[40508]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650735.6982684-71697-113180479221177/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:16 localhost python3[40570]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:17 localhost python3[40613]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650736.532384-71697-181091639995019/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=eec99266e2b532da3b9cbf709d99ea3775a9e36f
backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:17 localhost python3[40675]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:18 localhost python3[40718]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650737.3603592-71697-101288414874183/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:18 localhost python3[40780]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:18 localhost python3[40823]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650738.2363904-71697-8345201837070/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:19 localhost python3[40885]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:19 localhost python3[40928]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650739.0987594-71697-73109650435757/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=03498d2d6d62880127fd14c74761b0d53419095c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:20 localhost python3[40958]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:52:21 localhost python3[41006]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:52:21 localhost python3[41049]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650741.008398-72526-145172108781139/source _original_basename=tmpp4zpsfvd follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:52:26 localhost python3[41079]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 03:52:26 localhost python3[41140]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1
_uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:31 localhost python3[41157]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:36 localhost python3[41250]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:37 localhost python3[41273]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:41 localhost python3[41290]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:41 localhost systemd[35815]: Starting Mark boot as successful...
Oct 5 03:52:41 localhost systemd[35815]: Finished Mark boot as successful.
Oct 5 03:52:42 localhost python3[41314]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:46 localhost python3[41331]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:51 localhost python3[41348]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:51 localhost python3[41371]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:52:56 localhost python3[41388]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:01 localhost python3[41405]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:02 localhost python3[41428]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106
_uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:06 localhost python3[41445]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:11 localhost python3[41462]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:11 localhost python3[41485]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:15 localhost python3[41502]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 03:53:21 localhost python3[41519]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:21 localhost python3[41567]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:21
localhost python3[41585]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpdb9vcwys recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:22 localhost python3[41615]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:23 localhost python3[41663]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:23 localhost python3[41681]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:23 localhost python3[41743]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:24 localhost python3[41761]: ansible-ansible.legacy.file Invoked with mode=None
dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:24 localhost python3[41823]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:24 localhost python3[41841]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:25 localhost python3[41903]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:25 localhost python3[41921]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:26 localhost python3[41983]: ansible-ansible.legacy.stat Invoked with
path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:26 localhost python3[42001]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:26 localhost python3[42063]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:27 localhost python3[42081]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:27 localhost python3[42143]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:27 localhost python3[42161]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None
modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:28 localhost python3[42223]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:28 localhost python3[42241]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:29 localhost python3[42303]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:29 localhost python3[42321]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:29 localhost python3[42383]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:30 localhost python3[42401]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json
_original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:30 localhost python3[42463]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:30 localhost python3[42481]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 03:53:31 localhost python3[42511]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 03:53:32 localhost python3[42559]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 03:53:32 localhost python3[42577]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpye3f1puv recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:35 localhost python3[42669]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:53:39 localhost python3[42701]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:53:40 localhost python3[42719]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:53:40 localhost python3[42737]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:53:40 localhost systemd[1]: Reloading. Oct 5 03:53:41 localhost systemd-rc-local-generator[42761]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:53:41 localhost systemd-sysv-generator[42765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:53:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 03:53:41 localhost systemd[1]: Starting Netfilter Tables... Oct 5 03:53:41 localhost systemd[1]: Finished Netfilter Tables. Oct 5 03:53:41 localhost python3[42827]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:42 localhost python3[42870]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650821.6921558-75105-111451525815351/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:42 localhost python3[42900]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:43 localhost python3[42918]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:43 localhost python3[42967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:44 localhost python3[43010]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650823.5716882-75342-185306702531045/source mode=None follow=False _original_basename=jump-chain.j2 
checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:44 localhost python3[43072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:45 localhost python3[43115]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650824.503501-75403-221578661719486/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:45 localhost python3[43177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:46 localhost python3[43220]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650825.5181339-75467-128389055612374/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:46 localhost python3[43282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True 
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:47 localhost python3[43325]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650826.4577177-75618-196105794590876/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:48 localhost python3[43387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:53:48 localhost python3[43430]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650827.3266978-75667-143912319223137/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:48 localhost python3[43460]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:49 localhost python3[43525]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include 
"/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:53:49 localhost python3[43542]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:50 localhost python3[43559]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:50 localhost python3[43578]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:53:50 localhost python3[43594]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:53:51 localhost python3[43610]: ansible-file 
Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:53:51 localhost python3[43626]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Oct 5 03:53:53 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=7 res=1 Oct 5 03:53:53 localhost python3[43646]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Oct 5 03:53:53 localhost kernel: SELinux: Converting 2703 SID table entries... Oct 5 03:53:53 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:53:53 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:53:53 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:53:53 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:53:53 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:53:53 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:53:53 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:53:54 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=8 res=1 Oct 5 03:53:54 localhost python3[43667]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Oct 5 03:53:55 localhost kernel: SELinux: Converting 2703 SID table entries... 
Oct 5 03:53:55 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:53:55 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:53:55 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:53:55 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:53:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:53:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:53:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:53:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=9 res=1 Oct 5 03:53:55 localhost python3[43688]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Oct 5 03:53:56 localhost kernel: SELinux: Converting 2703 SID table entries... Oct 5 03:53:56 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:53:56 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:53:56 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:53:56 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:53:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:53:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:53:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:53:56 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=10 res=1 Oct 5 03:53:57 localhost python3[43709]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:53:57 localhost python3[43725]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:53:57 localhost python3[43741]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:53:57 localhost python3[43757]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:53:58 localhost python3[43773]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:53:59 localhost python3[43790]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None 
list=None releasever=None Oct 5 03:54:02 localhost python3[43807]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:03 localhost python3[43855]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:03 localhost python3[43898]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650842.8497422-76521-222318980530384/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:04 localhost python3[43928]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:54:04 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 5 03:54:04 localhost systemd[1]: Stopped Load Kernel Modules. Oct 5 03:54:04 localhost systemd[1]: Stopping Load Kernel Modules... Oct 5 03:54:04 localhost systemd[1]: Starting Load Kernel Modules... Oct 5 03:54:04 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 5 03:54:04 localhost kernel: Bridge firewalling registered Oct 5 03:54:04 localhost systemd-modules-load[43931]: Inserted module 'br_netfilter' Oct 5 03:54:04 localhost systemd-modules-load[43931]: Module 'msr' is built in Oct 5 03:54:04 localhost systemd[1]: Finished Load Kernel Modules. Oct 5 03:54:04 localhost python3[43982]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:05 localhost python3[44025]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650844.3779948-76586-37954587017707/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:05 localhost python3[44055]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:05 localhost python3[44072]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:06 localhost python3[44090]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:06 localhost python3[44108]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:06 localhost python3[44125]: ansible-sysctl Invoked with 
name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:07 localhost python3[44142]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:07 localhost python3[44159]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:07 localhost python3[44177]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:08 localhost python3[44195]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:08 localhost python3[44213]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:08 localhost python3[44231]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:08 localhost python3[44249]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:09 localhost python3[44267]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:09 localhost python3[44285]: ansible-sysctl Invoked with 
name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:09 localhost python3[44302]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:10 localhost python3[44319]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:10 localhost python3[44336]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:10 localhost python3[44353]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Oct 5 03:54:11 localhost python3[44371]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 03:54:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 5 03:54:11 localhost systemd[1]: Stopped Apply Kernel Variables. Oct 5 03:54:11 localhost systemd[1]: Stopping Apply Kernel Variables... Oct 5 03:54:11 localhost systemd[1]: Starting Apply Kernel Variables... Oct 5 03:54:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 5 03:54:11 localhost systemd[1]: Finished Apply Kernel Variables. 
Oct 5 03:54:11 localhost python3[44391]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:11 localhost python3[44407]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:12 localhost python3[44423]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:12 localhost python3[44439]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:54:13 localhost python3[44455]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:13 localhost python3[44471]: 
ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:13 localhost python3[44487]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:13 localhost python3[44503]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:14 localhost python3[44519]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:14 localhost python3[44567]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:15 localhost python3[44610]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650854.388275-76977-236835817241780/source _original_basename=tmpvnw_bweh follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:15 localhost python3[44640]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:16 localhost python3[44657]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:17 localhost python3[44705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:17 localhost python3[44748]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650856.9746485-77140-217505447244414/source _original_basename=tmp0vnzured follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:18 localhost python3[44778]: ansible-file Invoked with 
mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:18 localhost python3[44794]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:19 localhost python3[44810]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:19 localhost python3[44826]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:19 localhost python3[44842]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:19 localhost python3[44858]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:20 localhost python3[44874]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:20 localhost python3[44890]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:20 localhost python3[44906]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:21 localhost python3[44922]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False Oct 5 
03:54:21 localhost python3[44944]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471152.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Oct 5 03:54:22 localhost python3[44968]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Oct 5 03:54:22 localhost python3[44984]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:23 localhost python3[45033]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:23 localhost python3[45076]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650863.0207257-77533-68618262365461/source _original_basename=tmpu4rou0n2 follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:24 localhost python3[45106]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Oct 5 03:54:24 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=11 res=1 Oct 5 03:54:25 localhost python3[45127]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:25 localhost python3[45143]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Oct 5 03:54:26 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=12 res=1 Oct 5 03:54:27 localhost python3[45163]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:54:30 localhost python3[45180]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 03:54:30 localhost python3[45241]: ansible-file Invoked with 
path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:31 localhost python3[45257]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:31 localhost python3[45317]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:32 localhost python3[45360]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650871.5150225-77918-201862591495197/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=cdcdea86ab6e7874963fbcc658fe72a6456401a5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:32 localhost python3[45422]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:33 localhost python3[45467]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650872.5014145-77973-183405015185460/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 
checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:33 localhost python3[45497]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:33 localhost python3[45513]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:34 localhost python3[45529]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:34 localhost python3[45545]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:35 localhost python3[45593]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Oct 5 03:54:35 localhost python3[45636]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650874.817722-78086-256735027034012/source _original_basename=tmpb79zdwrm follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:35 localhost python3[45666]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:36 localhost python3[45682]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:36 localhost python3[45728]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None 
disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:54:40 localhost python3[45874]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:40 localhost python3[45919]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650879.8810313-78270-141599343912479/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:41 localhost python3[45950]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:54:41 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 5 03:54:41 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 5 03:54:41 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 5 03:54:41 localhost systemd[1]: sshd.service: Consumed 1.990s CPU time, read 1.9M from disk, written 4.0K to disk. Oct 5 03:54:41 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 5 03:54:41 localhost systemd[1]: Stopping sshd-keygen.target... Oct 5 03:54:41 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 03:54:41 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). 
Oct 5 03:54:41 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 03:54:41 localhost systemd[1]: Reached target sshd-keygen.target. Oct 5 03:54:41 localhost systemd[1]: Starting OpenSSH server daemon... Oct 5 03:54:41 localhost sshd[45954]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:54:41 localhost systemd[1]: Started OpenSSH server daemon. Oct 5 03:54:41 localhost python3[45970]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:42 localhost python3[45988]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:43 localhost python3[46006]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:54:46 localhost python3[46055]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:46 localhost python3[46073]: ansible-ansible.legacy.file Invoked with 
owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:47 localhost python3[46103]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:54:48 localhost python3[46153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:48 localhost python3[46171]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:49 localhost python3[46201]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:54:49 localhost systemd[1]: Reloading. Oct 5 03:54:49 localhost systemd-rc-local-generator[46228]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:54:49 localhost systemd-sysv-generator[46231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:54:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:54:49 localhost systemd[1]: Starting chronyd online sources service... Oct 5 03:54:49 localhost chronyc[46241]: 200 OK Oct 5 03:54:49 localhost systemd[1]: chrony-online.service: Deactivated successfully. Oct 5 03:54:49 localhost systemd[1]: Finished chronyd online sources service. Oct 5 03:54:50 localhost python3[46257]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:50 localhost chronyd[25796]: System clock was stepped by -0.000090 seconds Oct 5 03:54:50 localhost python3[46274]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:50 localhost python3[46291]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:50 localhost chronyd[25796]: System clock was stepped by 0.000000 seconds Oct 5 03:54:50 localhost python3[46308]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:51 localhost python3[46325]: ansible-timezone Invoked with name=UTC hwclock=None Oct 5 03:54:51 localhost systemd[1]: Starting Time & Date Service... 
Oct 5 03:54:51 localhost systemd[1]: Started Time & Date Service. Oct 5 03:54:52 localhost python3[46345]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:53 localhost python3[46362]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:53 localhost python3[46379]: ansible-slurp Invoked with src=/etc/tuned/active_profile Oct 5 03:54:54 localhost python3[46395]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:54:54 localhost python3[46411]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:55 localhost python3[46427]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:55 localhost python3[46475]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 
get_md5=False get_mime=True get_attributes=True Oct 5 03:54:56 localhost python3[46518]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650895.3554826-79435-137835616057642/source _original_basename=tmp35u3h_2l follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:56 localhost python3[46580]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:54:56 localhost python3[46623]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650896.2222247-79496-243952347329120/source _original_basename=tmpcvgzpxdc follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:57 localhost python3[46653]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 03:54:57 localhost systemd[1]: Reloading. Oct 5 03:54:57 localhost systemd-rc-local-generator[46677]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:54:57 localhost systemd-sysv-generator[46680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:54:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:54:58 localhost python3[46707]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:58 localhost python3[46723]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:58 localhost python3[46740]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:54:58 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Oct 5 03:54:59 localhost python3[46757]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:54:59 localhost python3[46773]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:54:59 localhost python3[46821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:55:00 localhost python3[46864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650899.568294-79677-9907280759635/source _original_basename=tmprwrc4dvb follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:55:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3391 writes, 16K keys, 3391 commit groups, 1.0 writes per commit 
group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3391 writes, 199 syncs, 17.04 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3391 writes, 16K keys, 3391 commit groups, 1.0 writes per commit group, ingest: 15.28 MB, 0.03 MB/s#012Interval WAL: 3391 writes, 199 syncs, 17.04 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): 
cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 
GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop 
Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 5 03:55:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 03:55:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 14.61 MB, 0.02 MB/s#012Interval WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn 
KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 
last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop 
for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s 
write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 5 03:55:21 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Oct 5 03:55:23 localhost python3[46896]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:23 localhost python3[46912]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Oct 5 03:55:24 localhost python3[46928]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:24 localhost python3[46944]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:25 localhost python3[46960]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:25 localhost python3[46976]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Oct 5 03:55:26 localhost kernel: SELinux: Converting 2706 SID table entries... 
Oct 5 03:55:26 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:55:26 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:55:26 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:55:26 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=13 res=1 Oct 5 03:55:26 localhost python3[46999]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:55:28 localhost python3[47136]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Oct 5 03:55:28 localhost rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Oct 5 03:55:28 localhost python3[47152]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None attributes=None Oct 5 03:55:29 localhost python3[47168]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:29 localhost python3[47184]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, 
'/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': 
{'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 
'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 
'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Oct 5 03:55:34 localhost python3[47232]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:55:35 localhost python3[47275]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759650934.655927-81197-5436197763498/source _original_basename=tmpwi_rkz36 follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:55:35 localhost python3[47305]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:55:38 localhost python3[47428]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:55:39 localhost python3[47611]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 5 03:55:41 localhost systemd[35815]: Created slice User Background Tasks Slice. Oct 5 03:55:41 localhost systemd[35815]: Starting Cleanup of User's Temporary Files and Directories... Oct 5 03:55:41 localhost systemd[35815]: Finished Cleanup of User's Temporary Files and Directories. Oct 5 03:55:41 localhost python3[47643]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:55:43 localhost python3[47660]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 03:55:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:46 localhost dbus-broker-launch[18324]: Noticed file-system modification, trigger reload. Oct 5 03:55:46 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. 
Oct 5 03:55:46 localhost dbus-broker-launch[18324]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Oct 5 03:55:46 localhost dbus-broker-launch[18324]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Oct 5 03:55:46 localhost systemd[1]: Reexecuting. Oct 5 03:55:46 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 5 03:55:46 localhost systemd[1]: Detected virtualization kvm. Oct 5 03:55:46 localhost systemd[1]: Detected architecture x86-64. Oct 5 03:55:47 localhost systemd-sysv-generator[47716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:55:47 localhost systemd-rc-local-generator[47712]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:55:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:55:55 localhost kernel: SELinux: Converting 2706 SID table entries... 
Oct 5 03:55:55 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 03:55:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 03:55:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 03:55:55 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=14 res=1 Oct 5 03:55:55 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload. Oct 5 03:55:56 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 03:55:56 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 03:55:56 localhost systemd[1]: Reloading. Oct 5 03:55:56 localhost systemd-rc-local-generator[47794]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:55:56 localhost systemd-sysv-generator[47799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:55:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:55:57 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Oct 5 03:55:57 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:55:57 localhost systemd-journald[618]: Journal stopped Oct 5 03:55:57 localhost systemd-journald[618]: Received SIGTERM from PID 1 (systemd). Oct 5 03:55:57 localhost systemd[1]: Stopping Journal Service... Oct 5 03:55:57 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Oct 5 03:55:57 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Oct 5 03:55:57 localhost systemd[1]: Stopped Journal Service. Oct 5 03:55:57 localhost systemd[1]: systemd-journald.service: Consumed 1.656s CPU time. Oct 5 03:55:57 localhost systemd[1]: Starting Journal Service... Oct 5 03:55:57 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 5 03:55:57 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Oct 5 03:55:57 localhost systemd[1]: systemd-udevd.service: Consumed 3.169s CPU time. Oct 5 03:55:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Oct 5 03:55:57 localhost systemd-journald[48149]: Journal started Oct 5 03:55:57 localhost systemd-journald[48149]: Runtime Journal (/run/log/journal/19f34a97e4e878e70ef0e6e08186acc9) is 12.1M, max 314.7M, 302.6M free. Oct 5 03:55:57 localhost systemd[1]: Started Journal Service. Oct 5 03:55:57 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Oct 5 03:55:57 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 03:55:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 03:55:57 localhost systemd-udevd[48161]: Using default interface naming scheme 'rhel-9.0'. 
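The journald entry above reports a field hash table "fill level at 75.4 (251 of 333 items), suggesting rotation"; journald suggests rotating a journal file once its hash tables are roughly 75% full. A quick sanity check of that arithmetic:

```python
# journald's reported field hash table usage: 251 of 333 slots in use
used, total = 251, 333
fill_pct = round(100 * used / total, 1)
print(fill_pct)  # 75.4 — just past the ~75% fill level at which journald suggests rotation
```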
Oct 5 03:55:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Oct 5 03:55:57 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 03:55:57 localhost systemd[1]: Reloading. Oct 5 03:55:57 localhost systemd-rc-local-generator[48805]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:55:57 localhost systemd-sysv-generator[48808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:55:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:55:57 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 03:55:58 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 03:55:58 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 03:55:58 localhost systemd[1]: man-db-cache-update.service: Consumed 1.432s CPU time. Oct 5 03:55:58 localhost systemd[1]: run-rc4fb1ba4993f4565bdc7a430624dc005.service: Deactivated successfully. Oct 5 03:55:58 localhost systemd[1]: run-rd950b431e04747e49d39f447b6645852.service: Deactivated successfully. 
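The entries in this log follow the traditional RFC 3164-style syslog layout: timestamp, host, program with an optional `[pid]`, then the message. A minimal parsing sketch (the regex and field names are illustrative, not from any library):

```python
import re

# "Oct 5 03:55:58 localhost systemd[1]: Finished man-db-cache-update.service."
SYSLOG_RE = re.compile(
    r"^(?P<month>\w{3})\s+(?P<day>\d+)\s+(?P<time>\d{2}:\d{2}:\d{2})\s+"
    r"(?P<host>\S+)\s+(?P<program>[^\s\[:]+)(?:\[(?P<pid>\d+)\])?:\s+(?P<msg>.*)$"
)

def parse_syslog_line(line):
    """Return the fields of one log entry as a dict, or None if it doesn't match."""
    m = SYSLOG_RE.match(line)
    if m is None:
        return None
    d = m.groupdict()
    d["day"] = int(d["day"])
    d["pid"] = int(d["pid"]) if d["pid"] else None
    return d
```

Note that multi-line module output (such as the Ansible `Invoked with …` dumps above) is still emitted as a single syslog message per line, so this per-line parse is enough to attribute each entry to a program and PID.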
Oct 5 03:55:59 localhost python3[49160]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False Oct 5 03:55:59 localhost python3[49179]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 03:56:00 localhost python3[49197]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:56:00 localhost python3[49197]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json Oct 5 03:56:01 localhost python3[49197]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false Oct 5 03:56:08 localhost podman[49209]: 2025-10-05 07:56:01.070366648 +0000 UTC m=+0.039611474 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:56:08 localhost python3[49197]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1571c200d626c35388c5864f613dd17fb1618f6192fe622da60a47fa61763c46 --format json Oct 5 03:56:09 localhost python3[49355]: ansible-containers.podman.podman_image Invoked with force=True 
name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:56:09 localhost python3[49355]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json Oct 5 03:56:09 localhost python3[49355]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false Oct 5 03:56:18 localhost podman[49368]: 2025-10-05 07:56:09.503728636 +0000 UTC m=+0.041687660 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 03:56:18 localhost python3[49355]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1e3eee8f9b979ec527f69dda079bc969bf9ddbe65c90f0543f3891d72e56a75e --format json Oct 5 03:56:18 localhost python3[49526]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:56:18 localhost python3[49526]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman 
image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json Oct 5 03:56:18 localhost python3[49526]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false Oct 5 03:56:36 localhost podman[49538]: 2025-10-05 07:56:18.908378599 +0000 UTC m=+0.045777064 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 03:56:36 localhost python3[49526]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a56a2196ea2290002b5e3e60b4c440f2326e4f1173ca4d9c0a320716a756e568 --format json Oct 5 03:56:37 localhost python3[50080]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:56:37 localhost python3[50080]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json Oct 5 03:56:37 localhost python3[50080]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false Oct 5 03:56:41 localhost systemd[1]: tmp-crun.Px9QSl.mount: Deactivated successfully. 
Oct 5 03:56:41 localhost podman[50273]: 2025-10-05 07:56:41.604187883 +0000 UTC m=+0.076326261 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55) Oct 5 03:56:41 localhost podman[50273]: 2025-10-05 07:56:41.708291985 +0000 UTC m=+0.180430393 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, architecture=x86_64) Oct 5 03:56:50 localhost podman[50093]: 2025-10-05 07:56:37.158262912 +0000 UTC m=+0.040861273 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 03:56:50 localhost python3[50080]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 89ed729ad5d881399a0bbd370b8f3c39b84e5a87c6e02b0d1f2c943d2d9cfb7a --format json Oct 5 03:56:51 localhost python3[50543]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:56:51 localhost python3[50543]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json Oct 5 03:56:51 localhost python3[50543]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false Oct 5 
03:56:59 localhost podman[50555]: 2025-10-05 07:56:51.21071001 +0000 UTC m=+0.044538097 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 5 03:56:59 localhost python3[50543]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a5e44a6280ab7a1da1b469cc214b40ecdad1d13f0c37c24f32cb45b40cce41d6 --format json Oct 5 03:56:59 localhost python3[50945]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:56:59 localhost python3[50945]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json Oct 5 03:57:00 localhost python3[50945]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false Oct 5 03:57:04 localhost podman[50958]: 2025-10-05 07:57:00.077559732 +0000 UTC m=+0.042186821 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 03:57:04 localhost python3[50945]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ef4308e71ba3950618e5de99f6c775558514a06fb9f6d93ca5c54d685a1349a6 --format json Oct 5 03:57:05 localhost python3[51080]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': 
True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:57:05 localhost python3[51080]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json Oct 5 03:57:05 localhost python3[51080]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false Oct 5 03:57:08 localhost podman[51092]: 2025-10-05 07:57:05.407611265 +0000 UTC m=+0.033594596 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 03:57:08 localhost python3[51080]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 5b5e3dbf480a168d795a47e53d0695cd833f381ef10119a3de87e5946f6b53e5 --format json Oct 5 03:57:08 localhost python3[51216]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:57:08 localhost python3[51216]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json Oct 5 03:57:08 localhost python3[51216]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false Oct 5 03:57:11 localhost 
podman[51228]: 2025-10-05 07:57:08.778109975 +0000 UTC m=+0.038306761 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 03:57:11 localhost python3[51216]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 250768c493b95c1151e047902a648e6659ba35adb4c6e0af85c231937d0cc9b7 --format json Oct 5 03:57:12 localhost python3[51352]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:57:12 localhost python3[51352]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json Oct 5 03:57:12 localhost python3[51352]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false Oct 5 03:57:14 localhost podman[51364]: 2025-10-05 07:57:12.192676377 +0000 UTC m=+0.045061303 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 5 03:57:14 localhost python3[51352]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 68d3d3a77bfc9fce94ca9ce2b28076450b851f6f1e82e97fbe356ce4ab0f7849 --format json Oct 5 03:57:15 localhost python3[51487]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 
'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:57:15 localhost python3[51487]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json Oct 5 03:57:15 localhost python3[51487]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false Oct 5 03:57:20 localhost podman[51499]: 2025-10-05 07:57:15.307524646 +0000 UTC m=+0.043068085 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 03:57:20 localhost python3[51487]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 01fc8d861e2b923ef0bf1d5c40a269bd976b00e8a31e8c56d63f3504b82b1c76 --format json Oct 5 03:57:21 localhost python3[51741]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Oct 5 03:57:21 localhost python3[51741]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json Oct 5 03:57:21 localhost python3[51741]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q 
--tls-verify=false Oct 5 03:57:23 localhost podman[51753]: 2025-10-05 07:57:21.311269038 +0000 UTC m=+0.034340607 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 03:57:23 localhost python3[51741]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7f7fcb1a516a6191c7a8cb132a460e04d50ca4381f114f08dcbfe84340e49ac0 --format json Oct 5 03:57:24 localhost python3[51877]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:57:26 localhost ansible-async_wrapper.py[52049]: Invoked with 868438827701 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651045.496819-84080-115237087637930/AnsiballZ_command.py _ Oct 5 03:57:26 localhost ansible-async_wrapper.py[52052]: Starting module and watcher Oct 5 03:57:26 localhost ansible-async_wrapper.py[52052]: Start watching 52053 (3600) Oct 5 03:57:26 localhost ansible-async_wrapper.py[52053]: Start module (52053) Oct 5 03:57:26 localhost ansible-async_wrapper.py[52049]: Return async_wrapper task started. Oct 5 03:57:26 localhost python3[52073]: ansible-ansible.legacy.async_status Invoked with jid=868438827701.52049 mode=status _async_dir=/tmp/.ansible_async Oct 5 03:57:29 localhost puppet-user[52071]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:29 localhost puppet-user[52071]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:29 localhost puppet-user[52071]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:29 localhost puppet-user[52071]: (file & line not available) Oct 5 03:57:29 localhost puppet-user[52071]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:29 localhost puppet-user[52071]: (file & line not available) Oct 5 03:57:29 localhost puppet-user[52071]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 5 03:57:29 localhost puppet-user[52071]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 5 03:57:29 localhost puppet-user[52071]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.13 seconds Oct 5 03:57:29 localhost puppet-user[52071]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully Oct 5 03:57:29 localhost puppet-user[52071]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created Oct 5 03:57:30 localhost puppet-user[52071]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully Oct 5 03:57:30 localhost puppet-user[52071]: Notice: Applied catalog in 0.31 seconds Oct 5 03:57:30 localhost puppet-user[52071]: Application: Oct 5 03:57:30 localhost puppet-user[52071]: Initial environment: production Oct 5 03:57:30 localhost puppet-user[52071]: Converged environment: production Oct 5 03:57:30 localhost puppet-user[52071]: Run mode: user Oct 5 03:57:30 localhost puppet-user[52071]: Changes: Oct 5 03:57:30 localhost puppet-user[52071]: Total: 3 Oct 5 03:57:30 localhost puppet-user[52071]: Events: Oct 5 03:57:30 localhost puppet-user[52071]: Success: 3 Oct 5 03:57:30 localhost puppet-user[52071]: Total: 3 Oct 5 03:57:30 localhost puppet-user[52071]: Resources: Oct 5 03:57:30 localhost puppet-user[52071]: Changed: 3 Oct 5 03:57:30 localhost puppet-user[52071]: Out of sync: 3 Oct 5 03:57:30 localhost 
puppet-user[52071]: Total: 10 Oct 5 03:57:30 localhost puppet-user[52071]: Time: Oct 5 03:57:30 localhost puppet-user[52071]: Schedule: 0.00 Oct 5 03:57:30 localhost puppet-user[52071]: File: 0.00 Oct 5 03:57:30 localhost puppet-user[52071]: Exec: 0.03 Oct 5 03:57:30 localhost puppet-user[52071]: Config retrieval: 0.17 Oct 5 03:57:30 localhost puppet-user[52071]: Augeas: 0.22 Oct 5 03:57:30 localhost puppet-user[52071]: Transaction evaluation: 0.26 Oct 5 03:57:30 localhost puppet-user[52071]: Catalog application: 0.31 Oct 5 03:57:30 localhost puppet-user[52071]: Last run: 1759651050 Oct 5 03:57:30 localhost puppet-user[52071]: Filebucket: 0.00 Oct 5 03:57:30 localhost puppet-user[52071]: Total: 0.31 Oct 5 03:57:30 localhost puppet-user[52071]: Version: Oct 5 03:57:30 localhost puppet-user[52071]: Config: 1759651049 Oct 5 03:57:30 localhost puppet-user[52071]: Puppet: 7.10.0 Oct 5 03:57:30 localhost ansible-async_wrapper.py[52053]: Module complete (52053) Oct 5 03:57:31 localhost ansible-async_wrapper.py[52052]: Done in kid B. 
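The puppet-user report that ends above is an indented key/value summary grouped under section headers (`Changes:`, `Events:`, `Resources:`, `Time:`, `Version:`). A small sketch for collecting those metrics per section; the section-tracking logic is an assumption about the layout, not part of Puppet's tooling:

```python
def parse_puppet_summary(lines):
    """Group 'Key: value' report lines under their section headers."""
    report, section = {}, None
    for line in lines:
        key, _, value = line.strip().partition(":")
        value = value.strip()
        if not value:                    # e.g. "Changes:" starts a new section
            section = key
            report[section] = {}
        elif section is not None:
            try:                         # numeric where possible ("Total: 10")
                report[section][key] = float(value) if "." in value else int(value)
            except ValueError:           # e.g. "Puppet: 7.10.0" stays a string
                report[section][key] = value
    return report
```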
Oct 5 03:57:36 localhost python3[52262]: ansible-ansible.legacy.async_status Invoked with jid=868438827701.52049 mode=status _async_dir=/tmp/.ansible_async Oct 5 03:57:37 localhost python3[52278]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:57:37 localhost python3[52294]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:57:38 localhost python3[52342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:57:38 localhost python3[52385]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651057.7545435-84316-189523994268542/source _original_basename=tmpf2p3_bc6 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 03:57:38 localhost python3[52415]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:57:39 localhost python3[52518]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 5 03:57:40 localhost python3[52537]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 03:57:40 localhost python3[52553]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005471152 step=1 update_config_hash_only=False Oct 5 03:57:41 localhost python3[52569]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:57:41 localhost python3[52585]: ansible-container_config_data Invoked with 
config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 03:57:42 localhost python3[52601]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 03:57:43 localhost python3[52643]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False Oct 5 03:57:43 localhost podman[52827]: 2025-10-05 07:57:43.933867508 +0000 UTC m=+0.073136345 container create aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude 
tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible) Oct 5 03:57:43 localhost systemd[1]: Started libpod-conmon-aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e.scope. 
Oct 5 03:57:43 localhost podman[52880]: 2025-10-05 07:57:43.984725112 +0000 UTC m=+0.060179540 container create 36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, release=2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 03:57:43 localhost systemd[1]: Started libcrun container. Oct 5 03:57:43 localhost podman[52827]: 2025-10-05 07:57:43.896265389 +0000 UTC m=+0.035534226 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 03:57:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5669f7c33f2cbae2c100aead59c5bc55d637c9fe9224f3ab6a48af0ed1c37483/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost podman[52881]: 2025-10-05 07:57:44.002419896 +0000 UTC m=+0.072102586 container create 90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, container_name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, config_id=tripleo_puppet_step1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Oct 5 03:57:44 localhost podman[52827]: 2025-10-05 07:57:44.008091612 +0000 UTC m=+0.147360449 container init aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, release=2, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T14:56:59, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 03:57:44 localhost podman[52849]: 2025-10-05 07:57:43.919890915 +0000 UTC m=+0.038748772 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 03:57:44 localhost podman[52827]: 2025-10-05 07:57:44.020647036 +0000 UTC m=+0.159915873 container start aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=2, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, tcib_managed=true, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, container_name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9) Oct 5 03:57:44 localhost systemd[1]: Started libpod-conmon-36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491.scope. Oct 5 03:57:44 localhost podman[52827]: 2025-10-05 07:57:44.022891128 +0000 UTC m=+0.162159965 container attach aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=2, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, container_name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 
'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, io.openshift.tags=rhosp osp openstack 
osp-17.1) Oct 5 03:57:44 localhost systemd[1]: Started libcrun container. Oct 5 03:57:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae54ce9a5138d7aabeb9eaabe0dcb4afb1a3468b56e9908af6f1efab9669523/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:44 localhost podman[52880]: 2025-10-05 07:57:44.043160503 +0000 UTC m=+0.118614931 container init 36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, distribution-scope=public, 
build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, io.openshift.expose-services=, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=container-puppet-collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12) Oct 5 03:57:44 localhost podman[52880]: 2025-10-05 07:57:43.954697999 +0000 UTC m=+0.030152427 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 03:57:44 localhost podman[52881]: 2025-10-05 07:57:43.96420857 +0000 UTC m=+0.033891240 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:57:44 localhost podman[52880]: 2025-10-05 07:57:44.992122636 +0000 UTC m=+1.067577104 container start 36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, release=2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 
'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_puppet_step1, io.openshift.expose-services=, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=container-puppet-collectd) Oct 5 03:57:44 localhost podman[52880]: 2025-10-05 07:57:44.992452905 +0000 UTC m=+1.067907373 container attach 36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=container-puppet-collectd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, container_name=container-puppet-collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 03:57:45 localhost systemd[1]: Started libpod-conmon-90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe.scope. Oct 5 03:57:45 localhost podman[52849]: 2025-10-05 07:57:45.07147462 +0000 UTC m=+1.190332497 container create 67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, config_id=tripleo_puppet_step1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=container-puppet-iscsid, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, architecture=x86_64) Oct 5 03:57:45 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:45 localhost podman[52876]: 2025-10-05 07:57:45.015830696 +0000 UTC m=+1.090504643 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 03:57:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a737dba04724aec001e9e6bcf76377258454853b5287a5bc8d87a57a3463c09/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:45 localhost podman[52876]: 2025-10-05 07:57:45.121837531 +0000 UTC m=+1.196511418 container create 04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, tcib_managed=true, name=rhosp17/openstack-cron, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1) Oct 5 03:57:45 localhost systemd[1]: Started libpod-conmon-67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39.scope. 
Oct 5 03:57:45 localhost podman[52881]: 2025-10-05 07:57:45.128125483 +0000 UTC m=+1.197808173 container init 90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, 
vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, container_name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1) Oct 5 03:57:45 localhost systemd[1]: Started libcrun container. Oct 5 03:57:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb7ced4bd7bd74e0a0f4ec2a0694dfa6707df5fca3b6302a69516f93b64f08f/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb7ced4bd7bd74e0a0f4ec2a0694dfa6707df5fca3b6302a69516f93b64f08f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:45 localhost systemd[1]: Started libpod-conmon-04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7.scope. 
Oct 5 03:57:45 localhost podman[52849]: 2025-10-05 07:57:45.183383737 +0000 UTC m=+1.302241604 container init 67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, container_name=container-puppet-iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, version=17.1.9) Oct 5 03:57:45 localhost systemd[1]: Started libcrun container. Oct 5 03:57:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfeb5e97bc5c93c6dd9c6b5d4562ebcdbcb5141c059d0f33a6487f50c5da8817/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:45 localhost podman[52849]: 2025-10-05 07:57:45.255876024 +0000 UTC m=+1.374733901 container start 67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, container_name=container-puppet-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 
'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_puppet_step1, io.openshift.expose-services=) Oct 5 03:57:45 localhost podman[52849]: 2025-10-05 07:57:45.256539872 +0000 UTC m=+1.375397769 container attach 67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, version=17.1.9, architecture=x86_64, config_id=tripleo_puppet_step1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, container_name=container-puppet-iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red 
Hat OpenStack Platform 17.1 iscsid) Oct 5 03:57:45 localhost podman[52881]: 2025-10-05 07:57:45.299867689 +0000 UTC m=+1.369550339 container start 90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, 
distribution-scope=public, release=1, container_name=container-puppet-metrics_qdr, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:57:45 localhost podman[52881]: 2025-10-05 07:57:45.300121446 +0000 UTC m=+1.369804186 container attach 90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=container-puppet-metrics_qdr) Oct 5 03:57:45 localhost podman[52876]: 2025-10-05 07:57:45.37183006 +0000 UTC m=+1.446503947 container init 04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 
'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 03:57:45 localhost podman[52876]: 2025-10-05 07:57:45.388144598 +0000 UTC m=+1.462818485 container start 04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vcs-type=git) Oct 5 03:57:45 localhost podman[52876]: 2025-10-05 07:57:45.38858566 +0000 UTC m=+1.463259557 container attach 04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-crond, io.openshift.expose-services=, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 03:57:46 localhost ovs-vsctl[53170]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Oct 5 03:57:46 localhost puppet-user[52964]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:46 localhost puppet-user[52964]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:46 localhost puppet-user[52964]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:46 localhost puppet-user[52964]: (file & line not available) Oct 5 03:57:46 localhost puppet-user[52964]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:46 localhost puppet-user[52964]: (file & line not available) Oct 5 03:57:46 localhost puppet-user[52962]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 5 03:57:46 localhost puppet-user[52962]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:46 localhost puppet-user[52962]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:46 localhost puppet-user[52962]: (file & line not available) Oct 5 03:57:46 localhost puppet-user[52962]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:46 localhost puppet-user[52962]: (file & line not available) Oct 5 03:57:46 localhost podman[52749]: 2025-10-05 07:57:43.808150733 +0000 UTC m=+0.031831192 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Oct 5 03:57:47 localhost puppet-user[53008]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[53008]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[53008]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[53008]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[53023]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:47 localhost puppet-user[53023]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:47 localhost puppet-user[53023]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:47 localhost puppet-user[53023]: (file & line not available) Oct 5 03:57:47 localhost puppet-user[53008]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[53008]: (file & line not available) Oct 5 03:57:47 localhost podman[53404]: 2025-10-05 07:57:47.079757141 +0000 UTC m=+0.068902200 container create f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1, build-date=2025-07-21T14:49:23, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-central-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, architecture=x86_64, distribution-scope=public) Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Oct 5 03:57:47 localhost puppet-user[52962]: in a future release. Use nova::cinder::os_region_name instead Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Oct 5 03:57:47 localhost puppet-user[52962]: in a future release. Use nova::cinder::catalog_info instead Oct 5 03:57:47 localhost puppet-user[53008]: Notice: Accepting previously invalid value for target type 'Integer' Oct 5 03:57:47 localhost puppet-user[53023]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:47 localhost puppet-user[53023]: (file & line not available) Oct 5 03:57:47 localhost systemd[1]: Started libpod-conmon-f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566.scope. Oct 5 03:57:47 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df95372cdfa3047b33cd0040d0663ba9895a7edf8e92f134854350b1276dcf4/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.31 seconds
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.12 seconds
Oct 5 03:57:47 localhost podman[53404]: 2025-10-05 07:57:47.129306779 +0000 UTC m=+0.118451848 container init f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, config_id=tripleo_puppet_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, container_name=container-puppet-ceilometer, vcs-type=git, build-date=2025-07-21T14:49:23, com.redhat.component=openstack-ceilometer-central-container, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-central, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1)
Oct 5 03:57:47 localhost podman[53404]: 2025-10-05 07:57:47.136700961 +0000 UTC m=+0.125846030 container start f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.buildah.version=1.33.12, vcs-type=git, container_name=container-puppet-ceilometer, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, name=rhosp17/openstack-ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T14:49:23, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, batch=17.1_20250721.1, release=1, io.openshift.expose-services=)
Oct 5 03:57:47 localhost podman[53404]: 2025-10-05 07:57:47.137009479 +0000 UTC m=+0.126154528 container attach f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, name=rhosp17/openstack-ceilometer-central, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1, com.redhat.component=openstack-ceilometer-central-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:23)
Oct 5 03:57:47 localhost podman[53404]: 2025-10-05 07:57:47.046263203 +0000 UTC m=+0.035408282 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root'
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root'
Oct 5 03:57:47 localhost puppet-user[53023]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.13 seconds
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755'
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created
Oct 5 03:57:47 localhost puppet-user[53039]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 03:57:47 localhost puppet-user[53039]: (file: /etc/puppet/hiera.yaml)
Oct 5 03:57:47 localhost puppet-user[53039]: Warning: Undefined variable '::deploy_config_name';
Oct 5 03:57:47 localhost puppet-user[53039]: (file & line not available)
Oct 5 03:57:47 localhost puppet-user[53039]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 03:57:47 localhost puppet-user[53039]: (file & line not available)
Oct 5 03:57:47 localhost puppet-user[53023]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Oct 5 03:57:47 localhost puppet-user[53023]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Oct 5 03:57:47 localhost puppet-user[53039]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.08 seconds
Oct 5 03:57:47 localhost puppet-user[53023]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}e6d46f4244129aadfc7d53b9b6f9c1368836ebb2ca837cf6800c3c590d78c6f3'
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created
Oct 5 03:57:47 localhost puppet-user[53008]: Notice: Applied catalog in 0.10 seconds
Oct 5 03:57:47 localhost puppet-user[53008]: Application:
Oct 5 03:57:47 localhost puppet-user[53008]: Initial environment: production
Oct 5 03:57:47 localhost puppet-user[53008]: Converged environment: production
Oct 5 03:57:47 localhost puppet-user[53008]: Run mode: user
Oct 5 03:57:47 localhost puppet-user[53008]: Changes:
Oct 5 03:57:47 localhost puppet-user[53008]: Total: 7
Oct 5 03:57:47 localhost puppet-user[53008]: Events:
Oct 5 03:57:47 localhost puppet-user[53008]: Success: 7
Oct 5 03:57:47 localhost puppet-user[53008]: Total: 7
Oct 5 03:57:47 localhost puppet-user[53008]: Resources:
Oct 5 03:57:47 localhost puppet-user[53008]: Skipped: 13
Oct 5 03:57:47 localhost puppet-user[53008]: Changed: 5
Oct 5 03:57:47 localhost puppet-user[53008]: Out of sync: 5
Oct 5 03:57:47 localhost puppet-user[53008]: Total: 20
Oct 5 03:57:47 localhost puppet-user[53008]: Time:
Oct 5 03:57:47 localhost puppet-user[53008]: File: 0.08
Oct 5 03:57:47 localhost puppet-user[53008]: Transaction evaluation: 0.10
Oct 5 03:57:47 localhost puppet-user[53008]: Catalog application: 0.10
Oct 5 03:57:47 localhost puppet-user[53008]: Config retrieval: 0.16
Oct 5 03:57:47 localhost puppet-user[53008]: Last run: 1759651067
Oct 5 03:57:47 localhost puppet-user[53008]: Total: 0.10
Oct 5 03:57:47 localhost puppet-user[53008]: Version:
Oct 5 03:57:47 localhost puppet-user[53008]: Config: 1759651067
Oct 5 03:57:47 localhost puppet-user[53008]: Puppet: 7.10.0
Oct 5 03:57:47 localhost puppet-user[53039]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Oct 5 03:57:47 localhost puppet-user[53039]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640'
Oct 5 03:57:47 localhost puppet-user[53039]: Notice: Applied catalog in 0.04 seconds
Oct 5 03:57:47 localhost puppet-user[53039]: Application:
Oct 5 03:57:47 localhost puppet-user[53039]: Initial environment: production
Oct 5 03:57:47 localhost puppet-user[53039]: Converged environment: production
Oct 5 03:57:47 localhost puppet-user[53039]: Run mode: user
Oct 5 03:57:47 localhost puppet-user[53039]: Changes:
Oct 5 03:57:47 localhost puppet-user[53039]: Total: 2
Oct 5 03:57:47 localhost puppet-user[53039]: Events:
Oct 5 03:57:47 localhost puppet-user[53039]: Success: 2
Oct 5 03:57:47 localhost puppet-user[53039]: Total: 2
Oct 5 03:57:47 localhost puppet-user[53039]: Resources:
Oct 5 03:57:47 localhost puppet-user[53039]: Changed: 2
Oct 5 03:57:47 localhost puppet-user[53039]: Out of sync: 2
Oct 5 03:57:47 localhost puppet-user[53039]: Skipped: 7
Oct 5 03:57:47 localhost puppet-user[53039]: Total: 9
Oct 5 03:57:47 localhost puppet-user[53039]: Time:
Oct 5 03:57:47 localhost puppet-user[53039]: File: 0.00
Oct 5 03:57:47 localhost puppet-user[53039]: Cron: 0.01
Oct 5 03:57:47 localhost puppet-user[53039]: Transaction evaluation: 0.03
Oct 5 03:57:47 localhost puppet-user[53039]: Catalog application: 0.04
Oct 5 03:57:47 localhost puppet-user[53039]: Config retrieval: 0.10
Oct 5 03:57:47 localhost puppet-user[53039]: Last run: 1759651067
Oct 5 03:57:47 localhost puppet-user[53039]: Total: 0.04
Oct 5 03:57:47 localhost puppet-user[53039]: Version:
Oct 5 03:57:47 localhost puppet-user[53039]: Config: 1759651067
Oct 5 03:57:47 localhost puppet-user[53039]: Puppet: 7.10.0
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750'
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
Oct 5 03:57:47 localhost puppet-user[52964]: Notice: Applied catalog in 0.24 seconds
Oct 5 03:57:47 localhost puppet-user[52964]: Application:
Oct 5 03:57:47 localhost puppet-user[52964]: Initial environment: production
Oct 5 03:57:47 localhost puppet-user[52964]: Converged environment: production
Oct 5 03:57:47 localhost puppet-user[52964]: Run mode: user
Oct 5 03:57:47 localhost puppet-user[52964]: Changes:
Oct 5 03:57:47 localhost puppet-user[52964]: Total: 43
Oct 5 03:57:47 localhost puppet-user[52964]: Events:
Oct 5 03:57:47 localhost puppet-user[52964]: Success: 43
Oct 5 03:57:47 localhost puppet-user[52964]: Total: 43
Oct 5 03:57:47 localhost puppet-user[52964]: Resources:
Oct 5 03:57:47 localhost puppet-user[52964]: Skipped: 14
Oct 5 03:57:47 localhost puppet-user[52964]: Changed: 38
Oct 5 03:57:47 localhost puppet-user[52964]: Out of sync: 38
Oct 5 03:57:47 localhost puppet-user[52964]: Total: 82
Oct 5 03:57:47 localhost puppet-user[52964]: Time:
Oct 5 03:57:47 localhost puppet-user[52964]: Concat fragment: 0.00
Oct 5 03:57:47 localhost puppet-user[52964]: File: 0.13
Oct 5 03:57:47 localhost puppet-user[52964]: Transaction evaluation: 0.23
Oct 5 03:57:47 localhost puppet-user[52964]: Catalog application: 0.24
Oct 5 03:57:47 localhost puppet-user[52964]: Config retrieval: 0.44
Oct 5 03:57:47 localhost puppet-user[52964]: Last run: 1759651067
Oct 5 03:57:47 localhost puppet-user[52964]: Concat file: 0.00
Oct 5 03:57:47 localhost puppet-user[52964]: Total: 0.24
Oct 5 03:57:47 localhost puppet-user[52964]: Version:
Oct 5 03:57:47 localhost puppet-user[52964]: Config: 1759651066
Oct 5 03:57:47 localhost puppet-user[52964]: Puppet: 7.10.0
Oct 5 03:57:47 localhost puppet-user[52962]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Oct 5 03:57:47 localhost systemd[1]: libpod-04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7.scope: Deactivated successfully.
Oct 5 03:57:47 localhost systemd[1]: libpod-04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7.scope: Consumed 2.024s CPU time.
Oct 5 03:57:47 localhost podman[52876]: 2025-10-05 07:57:47.569274244 +0000 UTC m=+3.643948191 container died 04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-type=git, container_name=container-puppet-crond, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_puppet_step1, tcib_managed=true, io.openshift.expose-services=)
Oct 5 03:57:47 localhost systemd[1]: libpod-90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe.scope: Deactivated successfully.
Oct 5 03:57:47 localhost systemd[1]: libpod-90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe.scope: Consumed 2.164s CPU time.
Oct 5 03:57:47 localhost puppet-user[53023]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Oct 5 03:57:47 localhost podman[52881]: 2025-10-05 07:57:47.625604588 +0000 UTC m=+3.695287238 container died 90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, build-date=2025-07-21T13:07:59, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, container_name=container-puppet-metrics_qdr, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public)
Oct 5 03:57:47 localhost puppet-user[53023]: Notice: Applied catalog in 0.45 seconds
Oct 5 03:57:47 localhost puppet-user[53023]: Application:
Oct 5 03:57:47 localhost puppet-user[53023]: Initial environment: production
Oct 5 03:57:47 localhost puppet-user[53023]: Converged environment: production
Oct 5 03:57:47 localhost puppet-user[53023]: Run mode: user
Oct 5 03:57:47 localhost puppet-user[53023]: Changes:
Oct 5 03:57:47 localhost puppet-user[53023]: Total: 4
Oct 5 03:57:47 localhost puppet-user[53023]: Events:
Oct 5 03:57:47 localhost puppet-user[53023]: Success: 4
Oct 5 03:57:47 localhost puppet-user[53023]: Total: 4
Oct 5 03:57:47 localhost puppet-user[53023]: Resources:
Oct 5 03:57:47 localhost puppet-user[53023]: Changed: 4
Oct 5 03:57:47 localhost puppet-user[53023]: Out of sync: 4
Oct 5 03:57:47 localhost puppet-user[53023]: Skipped: 8
Oct 5 03:57:47 localhost puppet-user[53023]: Total: 13
Oct 5 03:57:47 localhost puppet-user[53023]: Time:
Oct 5 03:57:47 localhost puppet-user[53023]: File: 0.00
Oct 5 03:57:47 localhost puppet-user[53023]: Exec: 0.04
Oct 5 03:57:47 localhost puppet-user[53023]: Config retrieval: 0.16
Oct 5 03:57:47 localhost puppet-user[53023]: Augeas: 0.39
Oct 5 03:57:47 localhost puppet-user[53023]: Transaction evaluation: 0.44
Oct 5 03:57:47 localhost puppet-user[53023]: Catalog application: 0.45
Oct 5 03:57:47 localhost puppet-user[53023]: Last run: 1759651067
Oct 5 03:57:47 localhost puppet-user[53023]: Total: 0.45
Oct 5 03:57:47 localhost puppet-user[53023]: Version:
Oct 5 03:57:47 localhost puppet-user[53023]: Config: 1759651067
Oct 5 03:57:47 localhost puppet-user[53023]: Puppet: 7.10.0
Oct 5 03:57:47 localhost podman[53579]: 2025-10-05 07:57:47.727182851 +0000 UTC m=+0.144683676 container cleanup 04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=container-puppet-crond, distribution-scope=public, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:07:52)
Oct 5 03:57:47 localhost systemd[1]: libpod-conmon-04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7.scope: Deactivated successfully.
Oct 5 03:57:47 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 5 03:57:47 localhost podman[53580]: 2025-10-05 07:57:47.746205462 +0000 UTC m=+0.164403336 container cleanup 90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_id=tripleo_puppet_step1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Oct 5 03:57:47 localhost systemd[1]: libpod-conmon-90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe.scope: Deactivated successfully.
Oct 5 03:57:47 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file 
--log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:57:47 localhost systemd[1]: libpod-36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491.scope: Deactivated successfully. Oct 5 03:57:47 localhost systemd[1]: libpod-36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491.scope: Consumed 2.593s CPU time. 
Oct 5 03:57:47 localhost podman[52880]: 2025-10-05 07:57:47.870341974 +0000 UTC m=+3.945796402 container died 36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, 
distribution-scope=public, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:04:03, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc.) Oct 5 03:57:47 localhost systemd[1]: libpod-67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39.scope: Deactivated successfully. Oct 5 03:57:47 localhost systemd[1]: libpod-67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39.scope: Consumed 2.557s CPU time. Oct 5 03:57:47 localhost podman[53692]: 2025-10-05 07:57:47.994102935 +0000 UTC m=+0.118426146 container cleanup 36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, release=2, vcs-type=git, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, container_name=container-puppet-collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, distribution-scope=public, version=17.1.9, config_data={'security_opt': ['label=disable'], 
'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container) Oct 5 03:57:47 localhost systemd[1]: libpod-conmon-36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491.scope: Deactivated successfully. 
Oct 5 03:57:48 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 03:57:48 localhost podman[52849]: 2025-10-05 07:57:48.04318619 +0000 UTC m=+4.162044047 container died 67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=container-puppet-iscsid, vendor=Red Hat, Inc., release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, config_id=tripleo_puppet_step1, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 
17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 5 03:57:48 localhost systemd[1]: tmp-crun.VpKfL9.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-dfeb5e97bc5c93c6dd9c6b5d4562ebcdbcb5141c059d0f33a6487f50c5da8817-merged.mount: Deactivated successfully. 
Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04d301f272f0e2860d2e1dce4176ee395dcd8a34e52cd2613be4fd3cf9bb51b7-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-7a737dba04724aec001e9e6bcf76377258454853b5287a5bc8d87a57a3463c09-merged.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90ec92a16f930affceb8b0cd497f3eb7efd6e24344cb75c1de5b49ae03d730fe-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-8ae54ce9a5138d7aabeb9eaabe0dcb4afb1a3468b56e9908af6f1efab9669523-merged.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:48 localhost systemd[1]: var-lib-containers-storage-overlay-bcb7ced4bd7bd74e0a0f4ec2a0694dfa6707df5fca3b6302a69516f93b64f08f-merged.mount: Deactivated successfully. 
Oct 5 03:57:48 localhost puppet-user[52962]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 1.27 seconds Oct 5 03:57:48 localhost podman[53755]: 2025-10-05 07:57:48.179008611 +0000 UTC m=+0.171212042 container cleanup 67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, container_name=container-puppet-iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, tcib_managed=true) Oct 5 03:57:48 localhost systemd[1]: libpod-conmon-67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39.scope: Deactivated successfully. Oct 5 03:57:48 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro 
registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 03:57:48 localhost podman[53821]: 2025-10-05 07:57:48.195905494 +0000 UTC m=+0.077500564 container create 293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, architecture=x86_64) Oct 5 03:57:48 localhost podman[53836]: 2025-10-05 07:57:48.224747835 +0000 UTC m=+0.079848309 container create 0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, container_name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_puppet_step1, version=17.1.9, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T12:58:40, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 03:57:48 localhost systemd[1]: Started libpod-conmon-293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d.scope. Oct 5 03:57:48 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18dc2747c1d228beeff09121da02d0b7f69981323209f5388a795a036816caf/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18dc2747c1d228beeff09121da02d0b7f69981323209f5388a795a036816caf/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:48 localhost systemd[1]: Started libpod-conmon-0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f.scope. Oct 5 03:57:48 localhost podman[53821]: 2025-10-05 07:57:48.249141983 +0000 UTC m=+0.130737043 container init 293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, container_name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=) Oct 5 03:57:48 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:48 localhost podman[53821]: 2025-10-05 07:57:48.256902886 +0000 UTC m=+0.138497966 container start 293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.buildah.version=1.33.12, container_name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Oct 5 03:57:48 localhost podman[53821]: 2025-10-05 07:57:48.25706779 +0000 UTC m=+0.138662860 container attach 293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, version=17.1.9, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_id=tripleo_puppet_step1, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 03:57:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd611533d0754c2b8855ffa9aefaf86645bfbe47724a0d11fe20ac2f596fdd7a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:48 localhost podman[53821]: 2025-10-05 07:57:48.163595729 +0000 UTC m=+0.045190819 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 03:57:48 localhost 
podman[53836]: 2025-10-05 07:57:48.269091601 +0000 UTC m=+0.124192055 container init 0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, version=17.1.9, config_id=tripleo_puppet_step1, tcib_managed=true, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, release=1) Oct 5 03:57:48 localhost podman[53836]: 2025-10-05 07:57:48.174963721 +0000 UTC m=+0.030064205 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 03:57:48 localhost podman[53836]: 2025-10-05 07:57:48.278369764 +0000 UTC m=+0.133470238 container start 0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-rsyslog-container, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 
'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167) Oct 5 03:57:48 localhost podman[53836]: 2025-10-05 07:57:48.278708474 +0000 UTC m=+0.133809028 container attach 0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-rsyslog-container, release=1, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T12:58:40, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64) Oct 5 03:57:48 localhost puppet-user[52962]: Notice: 
/Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}8f8307e6752131cfe7b76229011dc2c20353b7703527f4239dafa25c131174e7' Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Oct 5 03:57:48 localhost puppet-user[52962]: Warning: Empty environment setting 'TLS_PASSWORD' Oct 5 03:57:48 localhost puppet-user[52962]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}18702261db115bd07cacc9444f9a28a0592c863061b61e41d99fc113ec9c38a8' Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Oct 5 03:57:48 localhost 
puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Oct 5 03:57:48 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Oct 5 03:57:49 
localhost puppet-user[53508]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:49 localhost puppet-user[53508]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:49 localhost puppet-user[53508]: (file & line not available) Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:49 localhost puppet-user[53508]: (file & line not available) Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Oct 5 03:57:49 localhost 
puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Oct 5 03:57:49 localhost puppet-user[53508]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.37 seconds Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: 
/Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: 
created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Oct 5 03:57:49 localhost puppet-user[52962]: Notice: 
/Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Oct 5 03:57:49 localhost puppet-user[53508]: Notice: Applied catalog in 0.38 seconds
Oct 5 03:57:49 localhost puppet-user[53508]: Application:
Oct 5 03:57:49 localhost puppet-user[53508]: Initial environment: production
Oct 5 03:57:49 localhost puppet-user[53508]: Converged environment: production
Oct 5 03:57:49 localhost puppet-user[53508]: Run mode: user
Oct 5 03:57:49 localhost puppet-user[53508]: Changes:
Oct 5 03:57:49 localhost puppet-user[53508]: Total: 31
Oct 5 03:57:49 localhost puppet-user[53508]: Events:
Oct 5 03:57:49 localhost puppet-user[53508]: Success: 31
Oct 5 03:57:49 localhost puppet-user[53508]: Total: 31
Oct 5 03:57:49 localhost puppet-user[53508]: Resources:
Oct 5 03:57:49 localhost puppet-user[53508]: Skipped: 22
Oct 5 03:57:49 localhost puppet-user[53508]: Changed: 31
Oct 5 03:57:49 localhost puppet-user[53508]: Out of sync: 31
Oct 5 03:57:49 localhost puppet-user[53508]: Total: 151
Oct 5 03:57:49 localhost puppet-user[53508]: Time:
Oct 5 03:57:49 localhost puppet-user[53508]: Package: 0.02
Oct 5 03:57:49 localhost puppet-user[53508]: Ceilometer config: 0.29
Oct 5 03:57:49 localhost puppet-user[53508]: Transaction evaluation: 0.37
Oct 5 03:57:49 localhost puppet-user[53508]: Catalog application: 0.38
Oct 5 03:57:49 localhost puppet-user[53508]: Config retrieval: 0.44
Oct 5 03:57:49 localhost puppet-user[53508]: Last run: 1759651069
Oct 5 03:57:49 localhost puppet-user[53508]: Resources: 0.00
Oct 5 03:57:49 localhost puppet-user[53508]: Total: 0.38
Oct 5 03:57:49 localhost puppet-user[53508]: Version:
Oct 5 03:57:49 localhost puppet-user[53508]: Config: 1759651068
Oct 5 03:57:49 localhost puppet-user[53508]: Puppet: 7.10.0
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Oct 5 03:57:49 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 03:57:50 localhost puppet-user[53917]: (file: /etc/puppet/hiera.yaml)
Oct 5 03:57:50 localhost puppet-user[53917]: Warning: Undefined variable '::deploy_config_name';
Oct 5 03:57:50 localhost puppet-user[53917]: (file & line not available)
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 03:57:50 localhost puppet-user[53917]: (file & line not available)
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53927]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 03:57:50 localhost puppet-user[53927]: (file: /etc/puppet/hiera.yaml)
Oct 5 03:57:50 localhost puppet-user[53927]: Warning: Undefined variable '::deploy_config_name';
Oct 5 03:57:50 localhost puppet-user[53927]: (file & line not available)
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53927]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 03:57:50 localhost puppet-user[53927]: (file & line not available)
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}6626454871e6a8692d81b09b17969f804e05d0cbab5d6267f02be7b89a45b6ba'
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Oct 5 03:57:50 localhost systemd[1]: libpod-f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566.scope: Deactivated successfully.
Oct 5 03:57:50 localhost systemd[1]: libpod-f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566.scope: Consumed 2.844s CPU time.
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Oct 5 03:57:50 localhost podman[53404]: 2025-10-05 07:57:50.183408195 +0000 UTC m=+3.172553304 container died f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=container-puppet-ceilometer, version=17.1.9, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, architecture=x86_64, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-07-21T14:49:23, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.21 seconds
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Oct 5 03:57:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566-userdata-shm.mount: Deactivated successfully.
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Oct 5 03:57:50 localhost systemd[1]: var-lib-containers-storage-overlay-8df95372cdfa3047b33cd0040d0663ba9895a7edf8e92f134854350b1276dcf4-merged.mount: Deactivated successfully.
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54221]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Oct 5 03:57:50 localhost podman[54185]: 2025-10-05 07:57:50.306904889 +0000 UTC m=+0.110771637 container cleanup f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, distribution-scope=public, container_name=container-puppet-ceilometer, release=1, build-date=2025-07-21T14:49:23, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-central/images/17.1.9-1, com.redhat.component=openstack-ceilometer-central-container, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=1ce3db7211bdafb9cc5e59a103488bd6a8dc3f2f, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.33.12)
Oct 5 03:57:50 localhost systemd[1]: libpod-conmon-f81f6322584f1663fee99ffc3e0390e790a8f28f1a3c6a6850f0294185551566.scope: Deactivated successfully.
Oct 5 03:57:50 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54253]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53927]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.29 seconds
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54265]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54277]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005471152.localdomain
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005471152.novalocal' to 'np0005471152.localdomain'
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54281]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54283]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53927]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53927]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54285]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Oct 5 03:57:50 localhost puppet-user[53927]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}f5efb4f36b8d9610fb33696fa20669891a95704c675a0b3e3a955dcc1341c3bf'
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53927]: Notice: Applied catalog in 0.11 seconds
Oct 5 03:57:50 localhost puppet-user[53927]: Application:
Oct 5 03:57:50 localhost puppet-user[53927]: Initial environment: production
Oct 5 03:57:50 localhost puppet-user[53927]: Converged environment: production
Oct 5 03:57:50 localhost puppet-user[53927]: Run mode: user
Oct 5 03:57:50 localhost puppet-user[53927]: Changes:
Oct 5 03:57:50 localhost puppet-user[53927]: Total: 3
Oct 5 03:57:50 localhost puppet-user[53927]: Events:
Oct 5 03:57:50 localhost puppet-user[53927]: Success: 3
Oct 5 03:57:50 localhost puppet-user[53927]: Total: 3
Oct 5 03:57:50 localhost puppet-user[53927]: Resources:
Oct 5 03:57:50 localhost puppet-user[53927]: Skipped: 11
Oct 5 03:57:50 localhost puppet-user[53927]: Changed: 3
Oct 5 03:57:50 localhost puppet-user[53927]: Out of sync: 3
Oct 5 03:57:50 localhost puppet-user[53927]: Total: 25
Oct 5 03:57:50 localhost puppet-user[53927]: Time:
Oct 5 03:57:50 localhost puppet-user[53927]: Concat file: 0.00
Oct 5 03:57:50 localhost puppet-user[53927]: Concat fragment: 0.00
Oct 5 03:57:50 localhost puppet-user[53927]: File: 0.02
Oct 5 03:57:50 localhost puppet-user[53927]: Transaction evaluation: 0.11
Oct 5 03:57:50 localhost puppet-user[53927]: Catalog application: 0.11
Oct 5 03:57:50 localhost puppet-user[53927]: Config retrieval: 0.35
Oct 5 03:57:50 localhost puppet-user[53927]: Last run: 1759651070
Oct 5 03:57:50 localhost puppet-user[53927]: Total: 0.11
Oct 5 03:57:50 localhost puppet-user[53927]: Version:
Oct 5 03:57:50 localhost puppet-user[53927]: Config: 1759651070
Oct 5 03:57:50 localhost puppet-user[53927]: Puppet: 7.10.0
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54287]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54289]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54297]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54300]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:37:11:08
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54304]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54306]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Oct 5 03:57:50 localhost ovs-vsctl[54309]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Oct 5 03:57:50 localhost puppet-user[53917]: Notice: Applied catalog in 0.56 seconds
Oct 5 03:57:50 localhost puppet-user[53917]: Application:
Oct 5 03:57:50 localhost puppet-user[53917]: Initial environment: production
Oct 5 03:57:50 localhost puppet-user[53917]: Converged environment: production
Oct 5 03:57:50 localhost puppet-user[53917]: Run mode: user
Oct 5 03:57:50 localhost puppet-user[53917]: Changes:
Oct 5 03:57:50 localhost puppet-user[53917]: Total: 14
Oct 5 03:57:50 localhost puppet-user[53917]: Events:
Oct 5 03:57:50 localhost puppet-user[53917]: Success: 14
Oct 5 03:57:50 localhost puppet-user[53917]: Total: 14
Oct 5 03:57:50 localhost puppet-user[53917]: Resources:
Oct 5 03:57:50 localhost puppet-user[53917]: Skipped: 12
Oct 5 03:57:50 localhost puppet-user[53917]: Changed: 14
Oct 5 03:57:50 localhost puppet-user[53917]: Out of sync: 14
Oct 5 03:57:50 localhost puppet-user[53917]: Total: 29
Oct 5 03:57:50 localhost puppet-user[53917]: Time:
Oct 5 03:57:50 localhost puppet-user[53917]: Exec: 0.02
Oct 5 03:57:50 localhost puppet-user[53917]: Config retrieval: 0.24
Oct 5 03:57:50 localhost puppet-user[53917]: Vs config: 0.49
Oct 5 03:57:50 localhost puppet-user[53917]: Transaction evaluation: 0.54
Oct 5 03:57:50 localhost puppet-user[53917]: Catalog application: 0.56
Oct 5 03:57:50 localhost puppet-user[53917]: Last run: 1759651070
Oct 5 03:57:50 localhost puppet-user[53917]: Total: 0.56
Oct 5 03:57:50 localhost puppet-user[53917]: Version:
Oct 5 03:57:50 localhost
puppet-user[53917]: Config: 1759651070 Oct 5 03:57:50 localhost puppet-user[53917]: Puppet: 7.10.0 Oct 5 03:57:50 localhost systemd[1]: libpod-0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f.scope: Deactivated successfully. Oct 5 03:57:50 localhost systemd[1]: libpod-0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f.scope: Consumed 2.378s CPU time. Oct 5 03:57:50 localhost podman[53836]: 2025-10-05 07:57:50.848981052 +0000 UTC m=+2.704081526 container died 0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, container_name=container-puppet-rsyslog, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, config_id=tripleo_puppet_step1, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 03:57:50 localhost podman[54338]: 2025-10-05 07:57:50.957708672 +0000 UTC m=+0.096454454 container cleanup 0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, config_id=tripleo_puppet_step1, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 03:57:50 localhost systemd[1]: libpod-conmon-0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f.scope: Deactivated successfully. 
Oct 5 03:57:50 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Oct 5 03:57:51 localhost systemd[1]: tmp-crun.lnThTa.mount: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: var-lib-containers-storage-overlay-fd611533d0754c2b8855ffa9aefaf86645bfbe47724a0d11fe20ac2f596fdd7a-merged.mount: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: libpod-293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d.scope: Deactivated successfully. Oct 5 03:57:51 localhost systemd[1]: libpod-293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d.scope: Consumed 2.943s CPU time. 
Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Oct 5 03:57:51 localhost podman[53821]: 2025-10-05 07:57:51.632272896 +0000 UTC m=+3.513867986 container died 293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, config_id=tripleo_puppet_step1, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: 
/Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Oct 5 03:57:51 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Oct 5 03:57:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:52 localhost systemd[1]: var-lib-containers-storage-overlay-d18dc2747c1d228beeff09121da02d0b7f69981323209f5388a795a036816caf-merged.mount: Deactivated successfully. Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: 
/Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Oct 5 03:57:52 localhost podman[54420]: 2025-10-05 07:57:52.747730721 +0000 UTC m=+1.353967671 container cleanup 293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 03:57:52 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume 
/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 03:57:52 localhost systemd[1]: libpod-conmon-293e5c720d94341f2aa49eff6385b0f1619b8f656c1d2c6d90393ca80f79e07d.scope: Deactivated successfully. Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Oct 5 03:57:52 localhost podman[53865]: 2025-10-05 07:57:48.254449369 +0000 UTC m=+0.046270679 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Oct 5 03:57:52 localhost puppet-user[52962]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as 
'{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4' Oct 5 03:57:52 localhost puppet-user[52962]: Notice: Applied catalog in 4.51 seconds Oct 5 03:57:52 localhost puppet-user[52962]: Application: Oct 5 03:57:52 localhost puppet-user[52962]: Initial environment: production Oct 5 03:57:52 localhost puppet-user[52962]: Converged environment: production Oct 5 03:57:52 localhost puppet-user[52962]: Run mode: user Oct 5 03:57:52 localhost puppet-user[52962]: Changes: Oct 5 03:57:52 localhost puppet-user[52962]: Total: 183 Oct 5 03:57:52 localhost puppet-user[52962]: Events: Oct 5 03:57:52 localhost puppet-user[52962]: Success: 183 Oct 5 03:57:52 localhost puppet-user[52962]: Total: 183 Oct 5 03:57:52 localhost puppet-user[52962]: Resources: Oct 5 03:57:52 localhost puppet-user[52962]: Changed: 183 Oct 5 03:57:52 localhost puppet-user[52962]: Out of sync: 183 Oct 5 03:57:52 localhost puppet-user[52962]: Skipped: 57 Oct 5 03:57:52 localhost puppet-user[52962]: Total: 487 Oct 5 03:57:52 localhost puppet-user[52962]: Time: Oct 5 03:57:52 localhost puppet-user[52962]: Concat file: 0.00 Oct 5 03:57:52 localhost puppet-user[52962]: Concat fragment: 0.00 Oct 5 03:57:52 localhost puppet-user[52962]: Anchor: 0.00 Oct 5 03:57:52 localhost puppet-user[52962]: File line: 0.00 Oct 5 03:57:52 localhost puppet-user[52962]: Virtlogd config: 0.00 Oct 5 03:57:52 localhost puppet-user[52962]: Virtstoraged config: 0.01 Oct 5 03:57:52 localhost puppet-user[52962]: Virtqemud config: 0.01 Oct 5 03:57:52 localhost puppet-user[52962]: Virtsecretd config: 0.01 Oct 5 03:57:52 localhost puppet-user[52962]: Virtnodedevd config: 0.01 Oct 5 03:57:52 localhost puppet-user[52962]: Exec: 0.01 Oct 5 03:57:52 localhost puppet-user[52962]: Package: 0.02 Oct 5 03:57:52 localhost puppet-user[52962]: Virtproxyd config: 0.03 Oct 5 03:57:52 localhost puppet-user[52962]: File: 0.03 Oct 5 03:57:52 localhost puppet-user[52962]: Augeas: 0.91 Oct 5 03:57:52 localhost puppet-user[52962]: Config 
retrieval: 1.52 Oct 5 03:57:52 localhost puppet-user[52962]: Last run: 1759651072 Oct 5 03:57:52 localhost puppet-user[52962]: Nova config: 3.26 Oct 5 03:57:52 localhost puppet-user[52962]: Transaction evaluation: 4.50 Oct 5 03:57:52 localhost puppet-user[52962]: Catalog application: 4.51 Oct 5 03:57:52 localhost puppet-user[52962]: Resources: 0.00 Oct 5 03:57:52 localhost puppet-user[52962]: Total: 4.51 Oct 5 03:57:52 localhost puppet-user[52962]: Version: Oct 5 03:57:52 localhost puppet-user[52962]: Config: 1759651066 Oct 5 03:57:52 localhost puppet-user[52962]: Puppet: 7.10.0 Oct 5 03:57:53 localhost podman[54692]: 2025-10-05 07:57:53.026417058 +0000 UTC m=+0.093080412 container create 39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, container_name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 
'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1) Oct 5 03:57:53 localhost systemd[1]: Started libpod-conmon-39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126.scope. Oct 5 03:57:53 localhost podman[54692]: 2025-10-05 07:57:52.976751267 +0000 UTC m=+0.043414621 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Oct 5 03:57:53 localhost systemd[1]: Started libcrun container. 
Oct 5 03:57:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1db00d87bf07e127185efa8d09f579d03087a66661822459d6f980d1744528c7/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Oct 5 03:57:53 localhost podman[54692]: 2025-10-05 07:57:53.100844857 +0000 UTC m=+0.167508221 container init 39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, container_name=container-puppet-neutron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, version=17.1.9, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, vcs-type=git, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12) Oct 5 03:57:53 localhost podman[54692]: 2025-10-05 07:57:53.10896474 +0000 UTC m=+0.175628094 container start 39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server) Oct 5 03:57:53 localhost podman[54692]: 2025-10-05 07:57:53.109466933 +0000 UTC m=+0.176130297 container attach 39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.buildah.version=1.33.12, 
version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, container_name=container-puppet-neutron, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T15:44:03, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 03:57:53 localhost systemd[1]: libpod-aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e.scope: Deactivated successfully. Oct 5 03:57:53 localhost systemd[1]: libpod-aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e.scope: Consumed 8.394s CPU time. Oct 5 03:57:53 localhost podman[52827]: 2025-10-05 07:57:53.766995571 +0000 UTC m=+9.906264468 container died aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, release=2, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt) Oct 5 03:57:53 localhost podman[54759]: 2025-10-05 07:57:53.930409368 +0000 UTC m=+0.156536709 container cleanup aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, version=17.1.9, build-date=2025-07-21T14:56:59, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, release=2, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 03:57:53 localhost systemd[1]: libpod-conmon-aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e.scope: Deactivated successfully. 
Oct 5 03:57:53 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt 
profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw 
--volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 03:57:54 localhost systemd[1]: var-lib-containers-storage-overlay-5669f7c33f2cbae2c100aead59c5bc55d637c9fe9224f3ab6a48af0ed1c37483-merged.mount: Deactivated successfully. Oct 5 03:57:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa25539925ef3af308e8e9dff03b8b9b35ea9e2ba0f3cdd121e9a4789f1cc68e-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:54 localhost puppet-user[54724]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Oct 5 03:57:55 localhost puppet-user[54724]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 03:57:55 localhost puppet-user[54724]: (file: /etc/puppet/hiera.yaml) Oct 5 03:57:55 localhost puppet-user[54724]: Warning: Undefined variable '::deploy_config_name'; Oct 5 03:57:55 localhost puppet-user[54724]: (file & line not available) Oct 5 03:57:55 localhost puppet-user[54724]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 03:57:55 localhost puppet-user[54724]: (file & line not available) Oct 5 03:57:55 localhost puppet-user[54724]: Warning: Unknown variable: 'dhcp_agents_per_net'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Oct 5 03:57:55 localhost puppet-user[54724]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.69 seconds Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Oct 5 03:57:55 localhost 
puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Oct 5 03:57:55 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Oct 5 03:57:56 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Oct 5 03:57:56 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Oct 5 03:57:56 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Oct 5 03:57:56 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Oct 5 03:57:56 localhost puppet-user[54724]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Oct 5 03:57:56 localhost puppet-user[54724]: Notice: Applied catalog in 0.45 seconds Oct 5 03:57:56 localhost puppet-user[54724]: Application: Oct 5 03:57:56 localhost puppet-user[54724]: Initial environment: production Oct 5 03:57:56 localhost puppet-user[54724]: Converged environment: production Oct 5 03:57:56 localhost puppet-user[54724]: Run mode: user Oct 5 03:57:56 localhost puppet-user[54724]: Changes: Oct 5 03:57:56 localhost puppet-user[54724]: Total: 33 Oct 5 03:57:56 localhost puppet-user[54724]: Events: Oct 5 03:57:56 localhost puppet-user[54724]: Success: 33 Oct 5 03:57:56 localhost puppet-user[54724]: Total: 
33 Oct 5 03:57:56 localhost puppet-user[54724]: Resources: Oct 5 03:57:56 localhost puppet-user[54724]: Skipped: 21 Oct 5 03:57:56 localhost puppet-user[54724]: Changed: 33 Oct 5 03:57:56 localhost puppet-user[54724]: Out of sync: 33 Oct 5 03:57:56 localhost puppet-user[54724]: Total: 155 Oct 5 03:57:56 localhost puppet-user[54724]: Time: Oct 5 03:57:56 localhost puppet-user[54724]: Resources: 0.00 Oct 5 03:57:56 localhost puppet-user[54724]: Ovn metadata agent config: 0.02 Oct 5 03:57:56 localhost puppet-user[54724]: Neutron config: 0.37 Oct 5 03:57:56 localhost puppet-user[54724]: Transaction evaluation: 0.45 Oct 5 03:57:56 localhost puppet-user[54724]: Catalog application: 0.45 Oct 5 03:57:56 localhost puppet-user[54724]: Config retrieval: 0.76 Oct 5 03:57:56 localhost puppet-user[54724]: Last run: 1759651076 Oct 5 03:57:56 localhost puppet-user[54724]: Total: 0.45 Oct 5 03:57:56 localhost puppet-user[54724]: Version: Oct 5 03:57:56 localhost puppet-user[54724]: Config: 1759651075 Oct 5 03:57:56 localhost puppet-user[54724]: Puppet: 7.10.0 Oct 5 03:57:56 localhost systemd[1]: libpod-39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126.scope: Deactivated successfully. Oct 5 03:57:56 localhost systemd[1]: libpod-39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126.scope: Consumed 3.722s CPU time. 
Oct 5 03:57:56 localhost podman[54692]: 2025-10-05 07:57:56.857121535 +0000 UTC m=+3.923784949 container died 39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, architecture=x86_64, distribution-scope=public, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, release=1, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d) Oct 5 03:57:56 localhost systemd[1]: tmp-crun.IksyWf.mount: Deactivated successfully. Oct 5 03:57:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126-userdata-shm.mount: Deactivated successfully. Oct 5 03:57:56 localhost systemd[1]: var-lib-containers-storage-overlay-1db00d87bf07e127185efa8d09f579d03087a66661822459d6f980d1744528c7-merged.mount: Deactivated successfully. 
Oct 5 03:57:56 localhost podman[54910]: 2025-10-05 07:57:56.978660756 +0000 UTC m=+0.113712998 container cleanup 39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 03:57:56 localhost systemd[1]: libpod-conmon-39fa8fce28857fc10107fe0563c0c55efe8477daac4b9bd4265d839e4e2d6126.scope: Deactivated successfully. Oct 5 03:57:56 localhost python3[52643]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005471152 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005471152', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume 
/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Oct 5 03:57:57 localhost python3[54966]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:57:58 localhost python3[54998]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:57:59 localhost python3[55048]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:57:59 localhost python3[55091]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651079.0941026-84965-61421782770099/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:00 localhost python3[55153]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:58:00 localhost python3[55196]: ansible-ansible.legacy.copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651079.988139-84965-239303993804699/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:01 localhost python3[55258]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:58:01 localhost python3[55301]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651080.9032235-85043-103338733803089/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:02 localhost python3[55363]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:58:02 localhost python3[55406]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651081.8172276-85134-189433816414137/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False 
checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:03 localhost python3[55436]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:58:03 localhost systemd[1]: Reloading. Oct 5 03:58:03 localhost systemd-sysv-generator[55463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:03 localhost systemd-rc-local-generator[55457]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:03 localhost systemd[1]: Reloading. Oct 5 03:58:03 localhost systemd-rc-local-generator[55496]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:03 localhost systemd-sysv-generator[55502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:03 localhost systemd[1]: Starting TripleO Container Shutdown... Oct 5 03:58:03 localhost systemd[1]: Finished TripleO Container Shutdown. 
Oct 5 03:58:04 localhost python3[55560]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:58:04 localhost python3[55603]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651083.7568102-85173-189230175002087/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:05 localhost python3[55665]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 03:58:05 localhost python3[55708]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651084.6875412-85202-256442452879761/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:05 localhost python3[55738]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:58:05 localhost systemd[1]: Reloading. 
Oct 5 03:58:06 localhost systemd-sysv-generator[55768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:06 localhost systemd-rc-local-generator[55765]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:06 localhost systemd[1]: Reloading. Oct 5 03:58:06 localhost systemd-rc-local-generator[55798]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:06 localhost systemd-sysv-generator[55802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:06 localhost systemd[1]: Starting Create netns directory... Oct 5 03:58:06 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 03:58:06 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 03:58:06 localhost systemd[1]: Finished Create netns directory. 
Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 10ed3ae740a3c584de5be73e09f3fdc3 Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 4f35ee3aff3ccdd22a731d50021565d5 Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: c451d5e94e858df36b636f2835a46cda Oct 5 03:58:06 localhost python3[55831]: 
ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 7ae8f92d3eaef9724f650e9e8c537f24 Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 7ae8f92d3eaef9724f650e9e8c537f24 Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 61cb19106b923f6601e2c325a34cdd49 Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:06 localhost python3[55831]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 5d5b173631792e25c080b07e9b3e041b Oct 5 03:58:08 localhost python3[55889]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 5 03:58:08 localhost podman[55928]: 2025-10-05 07:58:08.676827701 +0000 UTC m=+0.077703979 container create 77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.openshift.expose-services=, 
version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr_init_logs, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Oct 5 03:58:08 localhost systemd[1]: Started libpod-conmon-77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683.scope. Oct 5 03:58:08 localhost systemd[1]: Started libcrun container. 
Oct 5 03:58:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265ee1c6a66a7a26bd10096fe90cded0c1a62994fc36010106069b2755e4df7c/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 5 03:58:08 localhost podman[55928]: 2025-10-05 07:58:08.741747651 +0000 UTC m=+0.142623949 container init 77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, container_name=metrics_qdr_init_logs, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, config_id=tripleo_step1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:58:08 localhost podman[55928]: 2025-10-05 07:58:08.643224201 +0000 UTC m=+0.044100489 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:58:08 localhost podman[55928]: 2025-10-05 
07:58:08.749310578 +0000 UTC m=+0.150186876 container start 77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, release=1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, container_name=metrics_qdr_init_logs, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible) Oct 5 03:58:08 localhost podman[55928]: 2025-10-05 07:58:08.750125101 +0000 UTC m=+0.151001409 container attach 77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, container_name=metrics_qdr_init_logs, version=17.1.9, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:58:08 localhost systemd[1]: libpod-77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683.scope: Deactivated successfully. 
Oct 5 03:58:08 localhost podman[55928]: 2025-10-05 07:58:08.758110409 +0000 UTC m=+0.158986707 container died 77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc.) 
Oct 5 03:58:08 localhost podman[55947]: 2025-10-05 07:58:08.8337015 +0000 UTC m=+0.063680425 container cleanup 77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, tcib_managed=true, container_name=metrics_qdr_init_logs, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 03:58:08 localhost systemd[1]: libpod-conmon-77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683.scope: Deactivated successfully. 
Oct 5 03:58:08 localhost python3[55889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Oct 5 03:58:09 localhost podman[56023]: 2025-10-05 07:58:09.24279206 +0000 UTC m=+0.061372062 container create 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 5 03:58:09 localhost systemd[1]: Started libpod-conmon-9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.scope. Oct 5 03:58:09 localhost systemd[1]: Started libcrun container. 
Oct 5 03:58:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c9c6b2f01f047207aca223ed13c75d75c3b5dfe8b2b9d0938721ee5dd381ac/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 5 03:58:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c9c6b2f01f047207aca223ed13c75d75c3b5dfe8b2b9d0938721ee5dd381ac/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Oct 5 03:58:09 localhost podman[56023]: 2025-10-05 07:58:09.213146828 +0000 UTC m=+0.031726870 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 03:58:09 localhost podman[56023]: 2025-10-05 07:58:09.32780503 +0000 UTC m=+0.146385042 container init 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 03:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 03:58:09 localhost podman[56023]: 2025-10-05 07:58:09.351964831 +0000 UTC m=+0.170544803 container start 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=metrics_qdr, architecture=x86_64, tcib_managed=true, vcs-type=git) Oct 5 03:58:09 localhost python3[55889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=10ed3ae740a3c584de5be73e09f3fdc3 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Oct 5 03:58:09 localhost podman[56044]: 2025-10-05 07:58:09.505469968 +0000 UTC m=+0.147811412 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 03:58:09 localhost systemd[1]: var-lib-containers-storage-overlay-265ee1c6a66a7a26bd10096fe90cded0c1a62994fc36010106069b2755e4df7c-merged.mount: Deactivated successfully. Oct 5 03:58:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77ad27d43a1c2b306cab33a929c2f9456f06caacef6bf4074dc0c44d81d7d683-userdata-shm.mount: Deactivated successfully. 
Oct 5 03:58:09 localhost podman[56044]: 2025-10-05 07:58:09.693214162 +0000 UTC m=+0.335555676 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, 
maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Oct 5 03:58:09 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 03:58:10 localhost python3[56119]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:10 localhost python3[56135]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 03:58:10 localhost python3[56196]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651090.3430123-85313-214935508066157/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:11 localhost python3[56212]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 03:58:11 localhost systemd[1]: Reloading. Oct 5 03:58:11 localhost systemd-rc-local-generator[56233]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 03:58:11 localhost systemd-sysv-generator[56238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:12 localhost python3[56264]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 03:58:12 localhost systemd[1]: Reloading. Oct 5 03:58:12 localhost systemd-rc-local-generator[56294]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 03:58:12 localhost systemd-sysv-generator[56299]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 03:58:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 03:58:12 localhost systemd[1]: Starting metrics_qdr container... Oct 5 03:58:12 localhost systemd[1]: Started metrics_qdr container. 
Oct 5 03:58:12 localhost python3[56345]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:14 localhost python3[56466]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005471152 step=1 update_config_hash_only=False Oct 5 03:58:14 localhost python3[56482]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 03:58:15 localhost python3[56498]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 03:58:31 localhost sshd[56499]: main: sshd: ssh-rsa algorithm is disabled Oct 5 03:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 03:58:39 localhost podman[56500]: 2025-10-05 07:58:39.880580053 +0000 UTC m=+0.049552609 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9) Oct 5 03:58:40 localhost podman[56500]: 2025-10-05 07:58:40.041184173 +0000 UTC m=+0.210156779 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, config_id=tripleo_step1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 03:58:40 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 03:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 03:59:10 localhost podman[56607]: 2025-10-05 07:59:10.913953233 +0000 UTC m=+0.080959749 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=) Oct 5 03:59:11 localhost podman[56607]: 2025-10-05 07:59:11.139277597 +0000 UTC m=+0.306284083 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1, version=17.1.9, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 03:59:11 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 03:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 03:59:41 localhost systemd[1]: tmp-crun.QJz8QZ.mount: Deactivated successfully. 
Oct 5 03:59:41 localhost podman[56636]: 2025-10-05 07:59:41.917304818 +0000 UTC m=+0.085058139 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 03:59:42 localhost podman[56636]: 2025-10-05 07:59:42.139613449 +0000 UTC m=+0.307366730 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 03:59:42 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:00:12 localhost podman[56745]: 2025-10-05 08:00:12.926021637 +0000 UTC m=+0.091094984 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 5 04:00:13 localhost podman[56745]: 2025-10-05 08:00:13.237084128 +0000 UTC m=+0.402157455 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:00:13 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:00:43 localhost podman[56774]: 2025-10-05 08:00:43.908222043 +0000 UTC m=+0.074327878 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, release=1) Oct 5 04:00:44 localhost podman[56774]: 2025-10-05 08:00:44.086922926 +0000 UTC m=+0.253028751 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, container_name=metrics_qdr, release=1, build-date=2025-07-21T13:07:59, architecture=x86_64, 
com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:00:44 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:01:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:01:14 localhost podman[56890]: 2025-10-05 08:01:14.913113836 +0000 UTC m=+0.084741022 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.9, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, release=1) Oct 5 04:01:15 localhost podman[56890]: 2025-10-05 08:01:15.132246512 +0000 UTC m=+0.303873718 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:07:59, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:01:15 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:01:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:01:45 localhost systemd[1]: tmp-crun.9TO1Cj.mount: Deactivated successfully. 
Oct 5 04:01:45 localhost podman[56920]: 2025-10-05 08:01:45.911524973 +0000 UTC m=+0.082847803 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, release=1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, 
io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:01:46 localhost podman[56920]: 2025-10-05 08:01:46.092143691 +0000 UTC m=+0.263466451 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, release=1) Oct 5 04:01:46 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:02:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:02:16 localhost podman[57025]: 2025-10-05 08:02:16.925999109 +0000 UTC m=+0.090310355 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-07-21T13:07:59) Oct 5 04:02:17 localhost podman[57025]: 2025-10-05 08:02:17.099048523 +0000 UTC m=+0.263359749 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64) Oct 5 04:02:17 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:02:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:02:47 localhost podman[57056]: 2025-10-05 08:02:47.905854251 +0000 UTC m=+0.077207562 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:02:48 localhost podman[57056]: 2025-10-05 08:02:48.097164606 +0000 UTC m=+0.268517927 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, release=1, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:02:48 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:03:18 localhost systemd[1]: tmp-crun.Ub76Sw.mount: Deactivated successfully. 
Oct 5 04:03:18 localhost podman[57162]: 2025-10-05 08:03:18.887887132 +0000 UTC m=+0.060156741 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Oct 5 04:03:19 localhost podman[57162]: 2025-10-05 08:03:19.044291798 +0000 UTC m=+0.216561377 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:07:59, version=17.1.9, distribution-scope=public, release=1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:03:19 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:03:49 localhost systemd[1]: tmp-crun.NEiI32.mount: Deactivated successfully. 
Oct 5 04:03:49 localhost podman[57189]: 2025-10-05 08:03:49.8874993 +0000 UTC m=+0.063472752 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:03:50 localhost podman[57189]: 2025-10-05 08:03:50.072038843 +0000 UTC m=+0.248012285 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, release=1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 5 04:03:50 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:03:53 localhost ceph-osd[32468]: osd.3 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2,3,1] r=1 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:03:55 localhost ceph-osd[32468]: osd.3 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,2,1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:03:56 localhost ceph-osd[32468]: osd.3 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,2,1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:03:57 localhost ceph-osd[32468]: osd.3 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [4,5,3] r=2 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:03:58 localhost ceph-osd[32468]: osd.3 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [3,4,2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:03:59 localhost 
ceph-osd[32468]: osd.3 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [3,4,2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:16 localhost ceph-osd[32468]: osd.3 pg_epoch: 30 pg[6.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [4,5,3] r=2 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:17 localhost ceph-osd[31524]: osd.0 pg_epoch: 31 pg[7.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [5,0,4] r=1 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:04:20 localhost podman[57312]: 2025-10-05 08:04:20.921141464 +0000 UTC m=+0.086260894 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, batch=17.1_20250721.1, container_name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true) Oct 5 04:04:21 localhost podman[57312]: 2025-10-05 08:04:21.113617453 +0000 UTC m=+0.278736913 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:07:59, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Oct 5 04:04:21 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:04:29 localhost ceph-osd[32468]: osd.3 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=12.317137718s) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active pruub 1176.505981445s@ mbc={}] start_peering_interval up [2,3,1] -> [2,3,1], acting [2,3,1] -> [2,3,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:29 localhost ceph-osd[32468]: osd.3 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=12.314544678s) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1176.505981445s@ mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.16( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.17( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.15( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.14( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.18( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost 
ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.13( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.12( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.11( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.10( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: 
osd.3 pg_epoch: 36 pg[2.b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.9( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.6( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.7( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.2( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.4( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 
pg[2.3( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.5( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.8( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.19( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1f( empty 
local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:30 localhost ceph-osd[32468]: osd.3 pg_epoch: 36 pg[2.1e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2,3,1] r=1 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:31 localhost ceph-osd[32468]: osd.3 pg_epoch: 37 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=14.553058624s) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 1180.790527344s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:31 localhost ceph-osd[32468]: osd.3 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=12.564229965s) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 1178.801757812s@ mbc={}] start_peering_interval up [3,2,1] -> [3,2,1], acting [3,2,1] -> [3,2,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:31 localhost ceph-osd[32468]: osd.3 pg_epoch: 37 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=14.549359322s) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1180.790527344s@ mbc={}] state: transitioning to Stray Oct 5 04:04:31 localhost ceph-osd[32468]: osd.3 pg_epoch: 37 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=12.564229965s) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1178.801757812s@ mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1f( empty local-lis/les=20/21 n=0 ec=37/20 
lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 
pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost 
ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.11( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.10( 
empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.13( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.19( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.12( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.15( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.8( empty 
local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.14( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.7( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.17( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.16( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.9( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.6( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.2( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.4( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.5( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.3( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.1d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[4.18( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [4,5,3] r=2 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:32 localhost ceph-osd[32468]: osd.3 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [3,2,1] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:33 localhost ceph-osd[32468]: osd.3 pg_epoch: 39 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=39 pruub=14.828710556s) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 active pruub 1183.084838867s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:33 localhost ceph-osd[32468]: osd.3 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.357094765s) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 1182.613281250s@ mbc={}] start_peering_interval up [3,4,2] -> [3,4,2], acting [3,4,2] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:33 localhost ceph-osd[32468]: osd.3 pg_epoch: 39 pg[6.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=39 pruub=14.826662064s) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1183.084838867s@ mbc={}] state: transitioning to Stray
Oct 5 04:04:33 localhost ceph-osd[32468]: osd.3 pg_epoch: 39 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.357094765s) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1182.613281250s@ mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Oct 5 04:04:34 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.b( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.9( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.2( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.6( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.3( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1b( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.c( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.e( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1a( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.19( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.18( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1f( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1e( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1d( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.7( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.d( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.12( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.f( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.5( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.4( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.8( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.10( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.a( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.13( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.11( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.14( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.17( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.15( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.16( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[6.1c( empty local-lis/les=30/31 n=0 ec=39/30 lis/c=30/30 les/c/f=31/31/0 sis=39) [4,5,3] r=2 lpr=39 pi=[30,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:34 localhost ceph-osd[32468]: osd.3 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [3,4,2] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:04:35 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 5 04:04:35 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 5 04:04:35 localhost ceph-osd[31524]: osd.0 pg_epoch: 41 pg[7.0( v 33'39 (0'0,33'39] local-lis/les=31/32 n=22 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=13.814489365s) [5,0,4] r=1 lpr=41 pi=[31,41)/1 luod=0'0 lua=33'37 crt=33'39 lcod 33'38 mlcod 0'0 active pruub 1188.531738281s@ mbc={}] start_peering_interval up [5,0,4] -> [5,0,4], acting [5,0,4] -> [5,0,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:35 localhost ceph-osd[31524]: osd.0 pg_epoch: 41 pg[7.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=13.812973976s) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 lcod 33'38 mlcod 0'0 unknown NOTIFY pruub 1188.531738281s@ mbc={}] state: transitioning to Stray
Oct 5 04:04:36 localhost ceph-osd[32468]:
log_channel(cluster) log [DBG] : 3.1c scrub starts Oct 5 04:04:36 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1c scrub ok Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.1( v 33'39 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY 
mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=2 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:36 localhost ceph-osd[31524]: osd.0 pg_epoch: 42 pg[7.b( v 
33'39 lc 0'0 (0'0,33'39] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=33'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Oct 5 04:04:37 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1b scrub starts Oct 5 04:04:37 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1b scrub ok Oct 5 04:04:38 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts Oct 5 04:04:38 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,1,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.230087280s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.792968750s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.230493546s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 
crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.793579102s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.230461121s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.793579102s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229988098s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.793090820s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.1( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.224687576s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.787841797s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229956627s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.793090820s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.1( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 
sis=43 pruub=8.224658012s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.787841797s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229846954s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.793090820s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229813576s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.793090820s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.230027199s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.793334961s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229239464s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.792602539s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.b( v 33'39 
(0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.230006218s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.793334961s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229970932s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1191.793334961s@ mbc={}] start_peering_interval up [5,0,4] -> [2,0,4], acting [5,0,4] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229951859s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.793334961s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229178429s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.792602539s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=8.229458809s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1191.792968750s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,2,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=0/0 n=0 
ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,2,1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,5] r=0 lpr=43 pi=[39,43)/1 
crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112504005s) [1,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270751953s@ mbc={}] start_peering_interval up [3,2,1] -> [1,2,3], acting [3,2,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.112431526s) [1,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270751953s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.155695915s) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314453125s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,1], acting [3,4,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.155657768s) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314453125s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.153452873s) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.312500000s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,1], acting [3,4,2] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 
04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.070266724s) [3,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229003906s@ mbc={}] start_peering_interval up [2,3,1] -> [3,2,4], acting [2,3,1] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.070266724s) [3,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.229003906s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115897179s) [2,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.275146484s@ mbc={}] start_peering_interval up [3,2,1] -> [2,1,3], acting [3,2,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115865707s) [2,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.275146484s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151266098s) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310791016s@ mbc={}] start_peering_interval up [3,4,2] -> 
[2,3,4], acting [3,4,2] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151233673s) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310791016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110733986s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270385742s@ mbc={}] start_peering_interval up [3,2,1] -> [1,5,0], acting [3,2,1] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.069014549s) [0,1,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.228881836s@ mbc={}] start_peering_interval up [2,3,1] -> [0,1,2], acting [2,3,1] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110705376s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270385742s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.068983078s) [0,1,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.228881836s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.12( 
empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.150979996s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310791016s@ mbc={}] start_peering_interval up [3,4,2] -> [3,5,4], acting [3,4,2] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.150979996s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.310791016s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110455513s) [2,3,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270385742s@ mbc={}] start_peering_interval up [3,2,1] -> [2,3,4], acting [3,2,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.110422134s) [2,3,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270385742s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151774406s) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.311889648s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,5], acting [3,4,2] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 
pruub=14.151774406s) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.311889648s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114984512s) [0,2,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.275146484s@ mbc={}] start_peering_interval up [3,2,1] -> [0,2,4], acting [3,2,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114934921s) [0,2,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.275146484s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.069301605s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229736328s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,4], acting [2,3,1] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.069270134s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229736328s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109893799s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270385742s@ mbc={}] start_peering_interval up [3,2,1] -> [1,5,0], acting [3,2,1] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 
-> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109854698s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270385742s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.069011688s) [3,1,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229614258s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,2], acting [2,3,1] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.069011688s) [3,1,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.229614258s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.151689529s) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.312500000s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109299660s) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270263672s@ mbc={}] start_peering_interval up [3,2,1] -> [3,5,4], acting [3,2,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109299660s) 
[3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.270263672s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109730721s) [2,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270996094s@ mbc={}] start_peering_interval up [3,2,1] -> [2,4,3], acting [3,2,1] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.068675995s) [0,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229980469s@ mbc={}] start_peering_interval up [2,3,1] -> [0,2,4], acting [2,3,1] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.068331718s) [3,1,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229614258s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,5], acting [2,3,1] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.108826637s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270263672s@ mbc={}] start_peering_interval up [3,2,1] -> [0,5,1], acting [3,2,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.f( empty 
local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.068110466s) [0,2,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229980469s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.067371368s) [0,5,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229736328s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,1], acting [2,3,1] -> [0,5,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.067343712s) [0,5,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229736328s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.067351341s) [3,1,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229858398s@ mbc={}] start_peering_interval up [2,3,1] -> [3,1,5], acting [2,3,1] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.067351341s) [3,1,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.229858398s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.109041214s) [2,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270996094s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.068331718s) [3,1,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.229614258s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.067001343s) [0,4,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229858398s@ mbc={}] start_peering_interval up [2,3,1] -> [0,4,2], acting [2,3,1] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.107012749s) [3,1,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [3,1,5], acting [3,2,1] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066896439s) [0,4,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229858398s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066609383s) [3,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229736328s@ mbc={}] start_peering_interval up [2,3,1] -> [3,5,4], acting [2,3,1] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066780090s) [1,0,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229980469s@ mbc={}] start_peering_interval up [2,3,1] -> [1,0,5], acting [2,3,1] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.107012749s) [3,1,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.270019531s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066748619s) [1,0,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229980469s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.106824875s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270263672s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.106654167s) [2,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270141602s@ mbc={}] start_peering_interval up [3,2,1] -> [2,4,3], acting [3,2,1] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066609383s) [3,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.229736328s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066542625s) [3,4,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230102539s@ mbc={}] start_peering_interval up [2,3,1] -> [3,4,2], acting [2,3,1] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066542625s) [3,4,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.230102539s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.106609344s) [2,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270141602s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.106092453s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [1,3,2], acting [3,2,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066247940s) [1,0,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230102539s@ mbc={}] start_peering_interval up [2,3,1] -> [1,0,2], acting [2,3,1] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.106036186s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066219330s) [1,0,2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230102539s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105901718s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [0,5,1], acting [3,2,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105872154s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066214561s) [2,0,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230468750s@ mbc={}] start_peering_interval up [2,3,1] -> [2,0,1], acting [2,3,1] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.066173553s) [2,0,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230468750s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105740547s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270141602s@ mbc={}] start_peering_interval up [3,2,1] -> [0,1,2], acting [3,2,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105699539s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270141602s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065929413s) [3,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230468750s@ mbc={}] start_peering_interval up [2,3,1] -> [3,2,4], acting [2,3,1] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065929413s) [3,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.230468750s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065779686s) [0,2,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230468750s@ mbc={}] start_peering_interval up [2,3,1] -> [0,2,1], acting [2,3,1] -> [0,2,1], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065748215s) [0,2,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230468750s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105573654s) [5,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270385742s@ mbc={}] start_peering_interval up [3,2,1] -> [5,4,3], acting [3,2,1] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065511703s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230468750s@ mbc={}] start_peering_interval up [2,3,1] -> [0,5,4], acting [2,3,1] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105465889s) [5,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270385742s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.104937553s) [1,5,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.269897461s@ mbc={}] start_peering_interval up [3,2,1] -> [1,5,3], acting [3,2,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.104907990s) [1,5,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.269897461s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141961098s) [3,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.307128906s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.148106575s) [0,4,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313354492s@ mbc={}] start_peering_interval up [3,4,2] -> [0,4,5], acting [3,4,2] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141961098s) [3,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.307128906s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.148065567s) [0,4,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313354492s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065117836s) [4,2,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230590820s@ mbc={}] start_peering_interval up [2,3,1] -> [4,2,3], acting [2,3,1] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.065093994s) [4,2,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230590820s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127584457s) [3,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293212891s@ mbc={}] start_peering_interval up [4,5,3] -> [3,1,2], acting [4,5,3] -> [3,1,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127584457s) [3,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.293212891s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.148950577s) [2,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314697266s@ mbc={}] start_peering_interval up [3,4,2] -> [2,0,4], acting [3,4,2] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.148922920s) [2,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314697266s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142817497s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308715820s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,1], acting [4,5,3] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142791748s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.308715820s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146857262s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.312988281s@ mbc={}] start_peering_interval up [3,4,2] -> [0,1,5], acting [3,4,2] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146825790s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.312988281s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.103642464s) [4,3,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [4,3,5], acting [3,2,1] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.103589058s) [4,3,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.103631020s) [2,3,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [2,3,4], acting [3,2,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146556854s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.312866211s@ mbc={}] start_peering_interval up [3,4,2] -> [3,5,4], acting [3,4,2] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.103572845s) [2,3,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.063920975s) [4,0,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230590820s@ mbc={}] start_peering_interval up [2,3,1] -> [4,0,5], acting [2,3,1] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.126475334s) [2,1,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293212891s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.063885689s) [4,0,5] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230590820s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.126447678s) [2,1,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.293212891s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146556854s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.312866211s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136990547s) [1,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.303833008s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,0], acting [3,4,2] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136959076s) [1,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.303833008s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143193245s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310302734s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143166542s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310302734s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127193451s) [2,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294433594s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,3], acting [4,5,3] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.127170563s) [2,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.294433594s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146583557s) [0,5,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313964844s@ mbc={}] start_peering_interval up [3,4,2] -> [0,5,4], acting [3,4,2] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146558762s) [0,5,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313964844s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.18( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141189575s) [4,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308715820s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.18( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141161919s) [4,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.308715820s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.064352989s) [0,5,4] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230468750s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140785217s) [4,0,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308715820s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,5], acting [4,5,3] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140755653s) [4,0,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.308715820s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125827789s) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293823242s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145935059s) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314086914s@ mbc={}] start_peering_interval up [3,4,2] -> [2,3,4], acting [3,4,2] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125827789s) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.293823242s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145904541s) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314086914s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139794350s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308227539s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139794350s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.308227539s@ mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125083923s) [2,4,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293457031s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145694733s) [2,4,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314086914s@ mbc={}] start_peering_interval up [3,4,2] -> [2,4,3], acting [3,4,2] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125781059s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294311523s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.125708580s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.294311523s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145631790s) [2,4,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314086914s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.150279045s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314208984s@ mbc={}] start_peering_interval up [3,4,2] -> [1,2,3], acting [3,4,2] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145490646s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314208984s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140096664s) [4,0,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308837891s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,5], acting [4,5,3] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.061480522s) [4,2,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230224609s@ mbc={}] start_peering_interval up [2,3,1] -> [4,2,3], acting [2,3,1] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.146216393s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314819336s@ mbc={}] start_peering_interval up [3,4,2] -> [1,2,0], acting [3,4,2] -> [1,2,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.061295509s) [4,2,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230224609s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.124534607s) [2,4,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.293457031s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139802933s) [4,0,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.308837891s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.145635605s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314819336s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105910301s) [4,0,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.275146484s@ mbc={}] start_peering_interval up [3,2,1] -> [4,0,5], acting [3,2,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138132095s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.307495117s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.105856895s) [4,0,5] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.275146484s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.060896873s) [2,0,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230224609s@ mbc={}] start_peering_interval up [2,3,1] -> [2,0,1], acting [2,3,1] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.138098717s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.307495117s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.060863495s) [2,0,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230224609s@ mbc={}] state<Start>: transitioning to Stray
Oct 5 04:04:44
localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122509003s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291992188s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137291908s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.306884766s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.137291908s) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.306884766s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 
lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143666267s) [5,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313354492s@ mbc={}] start_peering_interval up [3,4,2] -> [5,0,4], acting [3,4,2] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143596649s) [5,0,4] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313354492s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.123079300s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.292846680s@ mbc={}] start_peering_interval up [4,5,3] -> [0,5,1], acting [4,5,3] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122427940s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.291992188s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.123055458s) [0,5,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.292846680s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.124218941s) [3,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294189453s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,1], acting [4,5,3] -> [3,5,1], acting_primary 
4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.100066185s) [4,3,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [4,3,5], acting [3,2,1] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139560699s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.309448242s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143883705s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313842773s@ mbc={}] start_peering_interval up [3,4,2] -> [5,1,3], acting [3,4,2] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.124218941s) [3,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.294189453s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.099650383s) [4,3,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state: 
transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139073372s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.309448242s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.059905052s) [5,4,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230590820s@ mbc={}] start_peering_interval up [2,3,1] -> [5,4,3], acting [2,3,1] -> [5,4,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.059875488s) [5,4,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230590820s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122472763s) [5,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293212891s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,1], acting [4,5,3] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143343925s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314086914s@ mbc={}] start_peering_interval up [3,4,2] -> [0,1,5], acting [3,4,2] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 
43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.143316269s) [0,1,5] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314086914s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.122424126s) [5,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.293212891s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136221886s) [1,5,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.307128906s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.059040070s) [5,1,0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230102539s@ mbc={}] start_peering_interval up [2,3,1] -> [5,1,0], acting [2,3,1] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142787933s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313842773s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136195183s) [1,5,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.307128906s@ mbc={}] state: transitioning to Stray Oct 
5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.058992386s) [5,1,0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230102539s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.098686218s) [5,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [5,4,3], acting [3,2,1] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121417046s) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.292724609s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.098649025s) [5,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121417046s) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.292724609s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142498016s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313964844s@ 
mbc={}] start_peering_interval up [3,4,2] -> [4,5,0], acting [3,4,2] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.142442703s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313964844s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135303497s) [5,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.306884766s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.098484993s) [4,2,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270141602s@ mbc={}] start_peering_interval up [3,2,1] -> [4,2,0], acting [3,2,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135272026s) [5,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.306884766s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.098444939s) [4,2,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270141602s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost 
ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.058556557s) [5,0,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230224609s@ mbc={}] start_peering_interval up [2,3,1] -> [5,0,1], acting [2,3,1] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121313095s) [3,2,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293090820s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121313095s) [3,2,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.293090820s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.058506966s) [5,0,1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230224609s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134961128s) [5,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.306884766s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,0], acting [4,5,3] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/30 
lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134926796s) [5,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.306884766s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.098046303s) [4,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [4,3,2], acting [3,2,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.098017693s) [4,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136875153s) [5,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308959961s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136841774s) [5,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.308959961s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120965004s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.292968750s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,2], acting [4,5,3] -> [1,3,2], acting_primary 4 -> 
1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141440392s) [2,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313598633s@ mbc={}] start_peering_interval up [3,4,2] -> [2,4,0], acting [3,4,2] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.057964325s) [4,3,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230102539s@ mbc={}] start_peering_interval up [2,3,1] -> [4,3,5], acting [2,3,1] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120840073s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.292968750s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141386986s) [2,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313598633s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.057938576s) [4,3,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230102539s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 
sis=43 pruub=12.121321678s) [1,0,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293579102s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121292114s) [1,0,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.293579102s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141497612s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313842773s@ mbc={}] start_peering_interval up [3,4,2] -> [4,5,0], acting [3,4,2] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.141469002s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313842773s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136794090s) [4,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.309204102s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136761665s) [4,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY pruub 1193.309204102s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.102389336s) [4,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.275024414s@ mbc={}] start_peering_interval up [3,2,1] -> [4,2,3], acting [3,2,1] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.102345467s) [4,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.275024414s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121368408s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294189453s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.121342659s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.294189453s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140964508s) [5,1,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313842773s@ mbc={}] start_peering_interval up [3,4,2] -> [5,1,0], acting [3,4,2] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 
5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140933990s) [5,1,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313842773s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.056927681s) [4,5,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229980469s@ mbc={}] start_peering_interval up [2,3,1] -> [4,5,3], acting [2,3,1] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.056902885s) [4,5,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229980469s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135725975s) [2,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.308837891s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135692596s) [2,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.308837891s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134570122s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 
1193.307983398s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140279770s) [4,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313598633s@ mbc={}] start_peering_interval up [3,4,2] -> [4,2,0], acting [3,4,2] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134542465s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.307983398s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140243530s) [4,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313598633s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118613243s) [5,4,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.292114258s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,0], acting [4,5,3] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136395454s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.309936523s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting 
[4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096493721s) [4,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [4,5,0], acting [3,2,1] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118583679s) [5,4,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.292114258s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136350632s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.309936523s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096441269s) [4,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096242905s) [5,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [5,1,3], acting [3,2,1] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 
n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118267059s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291992188s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118240356s) [2,0,1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.291992188s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096209526s) [5,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139181137s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.312988281s@ mbc={}] start_peering_interval up [3,4,2] -> [1,2,3], acting [3,4,2] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136071205s) [2,3,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310058594s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,4], acting [4,5,3] -> [2,3,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136053085s) [2,3,4] r=1 
lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310058594s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096147537s) [5,3,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270141602s@ mbc={}] start_peering_interval up [3,2,1] -> [5,3,4], acting [3,2,1] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139131546s) [1,2,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.312988281s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096116066s) [5,3,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270141602s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120033264s) [2,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294067383s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,3], acting [4,5,3] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.120014191s) [2,1,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.294067383s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 
les/c/f=38/38/0 sis=43 pruub=12.096314430s) [5,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270385742s@ mbc={}] start_peering_interval up [3,2,1] -> [5,4,3], acting [3,2,1] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118216515s) [1,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.292358398s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.132471085s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.306640625s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.096285820s) [5,4,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270385742s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118186951s) [1,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.292358398s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.132421494s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY pruub 1193.306640625s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139408112s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.313720703s@ mbc={}] start_peering_interval up [3,4,2] -> [1,0,2], acting [3,4,2] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135649681s) [5,4,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310058594s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139379501s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.313720703s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135615349s) [5,4,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310058594s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.055507660s) [4,3,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.229858398s@ mbc={}] start_peering_interval up [2,3,1] -> [4,3,2], acting [2,3,1] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 
4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119586945s) [3,2,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294189453s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.119586945s) [3,2,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.294189453s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.055425644s) [4,3,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.229858398s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.140027046s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.314575195s@ mbc={}] start_peering_interval up [3,4,2] -> [5,0,1], acting [3,4,2] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.139990807s) [5,0,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.314575195s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.131933212s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active 
pruub 1193.306640625s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.131896019s) [1,2,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.306640625s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118515968s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293212891s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118487358s) [0,1,2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.293212891s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.14( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135615349s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310424805s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,0], acting [4,5,3] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.14( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135581970s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310424805s@ mbc={}] state: transitioning to Stray Oct 5 
04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118191719s) [4,2,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.293090820s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128900528s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.303833008s@ mbc={}] start_peering_interval up [3,4,2] -> [4,3,5], acting [3,4,2] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.118124962s) [4,2,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.293090820s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.128869057s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.303833008s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135725021s) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310791016s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.15( empty 
local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135690689s) [2,3,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310791016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.055207253s) [5,1,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.230468750s@ mbc={}] start_peering_interval up [2,3,1] -> [5,1,3], acting [2,3,1] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.055175781s) [5,1,3] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.230468750s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135288239s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310546875s@ mbc={}] start_peering_interval up [3,4,2] -> [5,1,3], acting [3,4,2] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.16( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135474205s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310791016s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,0], acting [4,5,3] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 
pruub=14.135243416s) [5,1,3] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310546875s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.16( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.135448456s) [4,5,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310791016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.094569206s) [5,0,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.270019531s@ mbc={}] start_peering_interval up [3,2,1] -> [5,0,4], acting [3,2,1] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.111744881s) [1,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.287353516s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.094530106s) [5,0,4] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.270019531s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115993500s) [3,1,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291503906s@ mbc={}] start_peering_interval up [4,5,3] -> [3,1,5], acting [4,5,3] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 
-> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136502266s) [5,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.312133789s@ mbc={}] start_peering_interval up [3,4,2] -> [5,4,0], acting [3,4,2] -> [5,4,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.111690521s) [1,2,3] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.287353516s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115993500s) [3,1,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.291503906s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136469841s) [5,4,0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.312133789s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134595871s) [3,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310424805s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134595871s) 
[3,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.310424805s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115891457s) [3,4,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291748047s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115891457s) [3,4,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.291748047s@ mbc={}] state: transitioning to Primary Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115837097s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291992188s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,2], acting [4,5,3] -> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.10( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.134006500s) [4,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310180664s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,2], acting [4,5,3] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115805626s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.291992188s@ 
mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.11( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133998871s) [4,3,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310302734s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.10( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133945465s) [4,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310180664s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136389732s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.312622070s@ mbc={}] start_peering_interval up [3,4,2] -> [4,3,5], acting [3,4,2] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.11( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133961678s) [4,3,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310302734s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.136347771s) [4,3,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.312622070s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115361214s) [2,3,1] r=1 lpr=43 
pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291748047s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115324020s) [2,3,1] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.291748047s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133200645s) [0,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.309814453s@ mbc={}] start_peering_interval up [4,5,3] -> [0,2,1], acting [4,5,3] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114918709s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291748047s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,2], acting [4,5,3] -> [1,3,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.051041603s) [5,3,1] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.227905273s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,1], acting [2,3,1] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 
ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114889145s) [1,3,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.291748047s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.050918579s) [5,3,1] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.227905273s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.051724434s) [5,3,4] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1189.228881836s@ mbc={}] start_peering_interval up [2,3,1] -> [5,3,4], acting [2,3,1] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=10.051695824s) [5,3,4] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.228881836s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115150452s) [1,3,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.292480469s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,5], acting [4,5,3] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133167267s) [0,2,1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.309814453s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: 
osd.3 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114214897s) [4,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.291503906s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,0], acting [4,5,3] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.115121841s) [1,3,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.292480469s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.13( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.132878304s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310180664s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.13( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.132827759s) [1,0,2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1193.310180664s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.114165306s) [4,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.291503906s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133077621s) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 1193.310546875s@ mbc={}] 
start_peering_interval up [4,5,3] -> [3,1,5], acting [4,5,3] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.116396904s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 1191.294067383s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,0], acting [4,5,3] -> [1,5,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.116355896s) [1,5,0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.294067383s@ mbc={}] state: transitioning to Stray Oct 5 04:04:44 localhost ceph-osd[32468]: osd.3 pg_epoch: 43 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=14.133077621s) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1193.310546875s@ mbc={}] state: transitioning to Primary Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,5,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,0,4] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 
pg[5.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,0,2] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [1,0,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [1,0,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,2,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [1,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [1,5,0] 
r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201883316s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1199.793823242s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201592445s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1199.793457031s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201544762s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.793457031s@ mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201817513s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.793823242s@ mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.2( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201761246s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1199.794067383s@ mbc={}] start_peering_interval up 
[5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.2( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201730728s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.794067383s@ mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201328278s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1199.793945312s@ mbc={}] start_peering_interval up [5,0,4] -> [4,3,5], acting [5,0,4] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=15.201264381s) [4,3,5] r=-1 lpr=44 pi=[41,44)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.793945312s@ mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [5,1,0] r=2 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [2,0,1] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [2,0,1] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 
5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,0,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,1,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,4,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,0,1] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,4,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [2,4,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [5,0,1] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,0,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=0/0 n=0 
ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,1,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,0,1] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,4,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,0,4] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [2,0,1] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [5,4,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,4,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [5,0,1] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[2.1e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [4,0,5] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 
0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,5,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,1,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,1,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,1,5] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[6.1b( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,2,0] r=2 
lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,0,5] r=1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,5] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,5,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,0,2] r=1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=0/0 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[5.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [4,5,0] r=2 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [4,2,0] r=2 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,1,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,4,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] 
state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[6.1e( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [3,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,2,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[6.1c( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,1,5] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 
n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,5,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,2,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[6.17( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,4,2] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [3,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [3,4,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,2,1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[3.1a( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,1,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=43/44 n=0 ec=39/30 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,2,1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 
les/c/f=40/40/0 sis=43) [0,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,1,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,5,1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,1,2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,2,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 
localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,4,5] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=43) [0,5,4] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,5,4] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:45 localhost ceph-osd[31524]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=43) [0,2,4] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Oct 5 04:04:46 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:46 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:46 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:46 localhost ceph-osd[32468]: osd.3 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=44) [4,3,5] r=1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 
04:04:49 localhost python3[57386]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:04:51 localhost systemd[1]: tmp-crun.dz2a1q.mount: Deactivated successfully. Oct 5 04:04:51 localhost podman[57403]: 2025-10-05 08:04:51.455563354 +0000 UTC m=+0.087728054 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, config_id=tripleo_step1, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=) Oct 5 04:04:51 localhost python3[57402]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:51 localhost podman[57403]: 2025-10-05 08:04:51.636662142 +0000 UTC m=+0.268826802 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, tcib_managed=true) Oct 5 04:04:51 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:04:53 localhost python3[57447]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:53 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.15 scrub starts Oct 5 04:04:53 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.9 scrub starts Oct 5 04:04:56 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1d scrub starts Oct 5 04:04:56 localhost python3[57495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:04:57 localhost python3[57538]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651496.420319-93302-253640738701209/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=d68e0db228a7d8458c08a66635a19e112f8e9d34 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.841607094s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1208.622680664s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.842153549s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1208.623291016s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.841517448s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1208.622680664s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.842101097s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1208.623291016s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.841526031s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1208.622680664s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.3( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.841432571s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1208.622680664s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost 
ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.841302872s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1208.623046875s@ mbc={}] start_peering_interval up [2,0,4] -> [1,2,3], acting [2,0,4] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:57 localhost ceph-osd[31524]: osd.0 pg_epoch: 46 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46 pruub=11.841201782s) [1,2,3] r=-1 lpr=46 pi=[43,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1208.623046875s@ mbc={}] state: transitioning to Stray Oct 5 04:04:57 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.15 scrub ok Oct 5 04:04:59 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.0 scrub starts Oct 5 04:04:59 localhost ceph-osd[32468]: osd.3 pg_epoch: 46 pg[7.b( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=2 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[32468]: osd.3 pg_epoch: 46 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=2 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[32468]: osd.3 pg_epoch: 46 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=2 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[32468]: osd.3 pg_epoch: 46 pg[7.7( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=46) [1,2,3] r=2 lpr=46 pi=[43,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.0 scrub ok Oct 5 
04:04:59 localhost ceph-osd[31524]: osd.0 pg_epoch: 48 pg[7.4( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.962707520s) [1,5,0] r=2 lpr=48 pi=[41,48)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1207.794433594s@ mbc={}] start_peering_interval up [5,0,4] -> [1,5,0], acting [5,0,4] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:59 localhost ceph-osd[31524]: osd.0 pg_epoch: 48 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.962619781s) [1,5,0] r=2 lpr=48 pi=[41,48)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1207.794433594s@ mbc={}] start_peering_interval up [5,0,4] -> [1,5,0], acting [5,0,4] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:04:59 localhost ceph-osd[31524]: osd.0 pg_epoch: 48 pg[7.4( v 33'39 (0'0,33'39] local-lis/les=41/42 n=2 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.962595940s) [1,5,0] r=2 lpr=48 pi=[41,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.794433594s@ mbc={}] state: transitioning to Stray Oct 5 04:04:59 localhost ceph-osd[31524]: osd.0 pg_epoch: 48 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=8.962557793s) [1,5,0] r=2 lpr=48 pi=[41,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.794433594s@ mbc={}] state: transitioning to Stray Oct 5 04:05:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:05:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4015 writes, 19K keys, 4015 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4015 writes, 310 syncs, 
12.95 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 624 writes, 2497 keys, 624 commit groups, 1.0 writes per commit group, ingest: 1.24 MB, 0.00 MB/s#012Interval WAL: 624 writes, 111 syncs, 5.62 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 
0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): 
cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Oct 5 04:05:00 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.d scrub starts Oct 5 04:05:01 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.d scrub ok Oct 5 04:05:01 localhost ceph-osd[32468]: osd.3 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50) [3,4,2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:01 localhost ceph-osd[32468]: osd.3 pg_epoch: 50 pg[7.d( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50) [3,4,2] r=0 lpr=50 pi=[43,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Oct 5 04:05:01 localhost ceph-osd[31524]: osd.0 pg_epoch: 50 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.733035088s) [3,4,2] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1216.623168945s@ mbc={}] start_peering_interval up [2,0,4] -> 
[3,4,2], acting [2,0,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:01 localhost ceph-osd[31524]: osd.0 pg_epoch: 50 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.732202530s) [3,4,2] r=-1 lpr=50 pi=[43,50)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1216.622680664s@ mbc={}] start_peering_interval up [2,0,4] -> [3,4,2], acting [2,0,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:01 localhost ceph-osd[31524]: osd.0 pg_epoch: 50 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.732099533s) [3,4,2] r=-1 lpr=50 pi=[43,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1216.622680664s@ mbc={}] state: transitioning to Stray Oct 5 04:05:01 localhost ceph-osd[31524]: osd.0 pg_epoch: 50 pg[7.5( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50 pruub=15.732929230s) [3,4,2] r=-1 lpr=50 pi=[43,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1216.623168945s@ mbc={}] state: transitioning to Stray Oct 5 04:05:01 localhost python3[57600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:01 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts Oct 5 04:05:01 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok Oct 5 04:05:01 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.e scrub starts Oct 5 04:05:02 localhost python3[57643]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651501.60371-93302-50402394564467/source 
dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=e73c1aa4a58d9801d80c3db0f6e886adadfd04c0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:02 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.e scrub ok Oct 5 04:05:02 localhost ceph-osd[32468]: osd.3 pg_epoch: 51 pg[7.5( v 33'39 lc 33'6 (0'0,33'39] local-lis/les=50/51 n=2 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50) [3,4,2] r=0 lpr=50 pi=[43,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:02 localhost ceph-osd[32468]: osd.3 pg_epoch: 51 pg[7.d( v 33'39 lc 33'7 (0'0,33'39] local-lis/les=50/51 n=1 ec=41/31 lis/c=43/43 les/c/f=44/45/0 sis=50) [3,4,2] r=0 lpr=50 pi=[43,50)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Oct 5 04:05:04 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.12 scrub starts Oct 5 04:05:04 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.12 scrub ok Oct 5 04:05:04 localhost ceph-osd[32468]: osd.3 pg_epoch: 52 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.109498978s) [1,2,3] r=2 lpr=52 pi=[44,52)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1213.232666016s@ mbc={}] start_peering_interval up [4,3,5] -> [1,2,3], acting [4,3,5] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:04 localhost ceph-osd[32468]: osd.3 pg_epoch: 52 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.109426498s) [1,2,3] r=2 lpr=52 
pi=[44,52)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1213.232666016s@ mbc={}] state: transitioning to Stray Oct 5 04:05:04 localhost ceph-osd[32468]: osd.3 pg_epoch: 52 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.109051704s) [1,2,3] r=2 lpr=52 pi=[44,52)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1213.233032227s@ mbc={}] start_peering_interval up [4,3,5] -> [1,2,3], acting [4,3,5] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:04 localhost ceph-osd[32468]: osd.3 pg_epoch: 52 pg[7.6( v 33'39 (0'0,33'39] local-lis/les=44/45 n=2 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=14.108999252s) [1,2,3] r=2 lpr=52 pi=[44,52)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1213.233032227s@ mbc={}] state: transitioning to Stray Oct 5 04:05:05 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.13 scrub starts Oct 5 04:05:05 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.13 scrub ok Oct 5 04:05:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:05:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4836 writes, 22K keys, 4836 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4836 writes, 376 syncs, 12.86 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1589 writes, 6122 keys, 1589 commit groups, 1.0 writes per commit group, ingest: 2.24 MB, 0.00 MB/s#012Interval WAL: 1589 writes, 237 syncs, 6.70 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) 
Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 
occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 
level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, 
interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Oct 5 04:05:06 localhost ceph-osd[32468]: osd.3 pg_epoch: 54 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.685124397s) [2,0,1] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1209.841796875s@ mbc={}] start_peering_interval up [1,2,3] -> [2,0,1], acting [1,2,3] -> [2,0,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:06 localhost ceph-osd[32468]: osd.3 pg_epoch: 54 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.685056686s) [2,0,1] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1209.841796875s@ mbc={}] start_peering_interval up [1,2,3] -> [2,0,1], acting [1,2,3] -> [2,0,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:06 localhost ceph-osd[32468]: osd.3 pg_epoch: 54 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.685042381s) [2,0,1] r=-1 lpr=54 pi=[46,54)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1209.841796875s@ mbc={}] state: transitioning to Stray Oct 5 04:05:06 localhost ceph-osd[32468]: osd.3 pg_epoch: 54 pg[7.7( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=8.684971809s) [2,0,1] r=-1 lpr=54 pi=[46,54)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1209.841796875s@ mbc={}] state: transitioning to Stray Oct 5 
04:05:07 localhost python3[57705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:07 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts Oct 5 04:05:07 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok Oct 5 04:05:07 localhost ceph-osd[31524]: osd.0 pg_epoch: 54 pg[7.7( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54) [2,0,1] r=1 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:05:07 localhost ceph-osd[31524]: osd.0 pg_epoch: 55 pg[7.8( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=9.211732864s) [1,2,3] r=-1 lpr=55 pi=[41,55)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1215.795043945s@ mbc={}] start_peering_interval up [5,0,4] -> [1,2,3], acting [5,0,4] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:07 localhost ceph-osd[31524]: osd.0 pg_epoch: 55 pg[7.8( v 33'39 (0'0,33'39] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=9.211642265s) [1,2,3] r=-1 lpr=55 pi=[41,55)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1215.795043945s@ mbc={}] state: transitioning to Stray Oct 5 04:05:07 localhost ceph-osd[31524]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=46/46 les/c/f=47/47/0 sis=54) [2,0,1] r=1 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:05:07 localhost python3[57748]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651506.763964-93302-142495179568410/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False 
checksum=83e4275d7d1daa1c790a878bb63e3d5916f491b2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:09 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts Oct 5 04:05:09 localhost ceph-osd[32468]: osd.3 pg_epoch: 55 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=55) [1,2,3] r=2 lpr=55 pi=[41,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Oct 5 04:05:09 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok Oct 5 04:05:09 localhost ceph-osd[31524]: osd.0 pg_epoch: 57 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=15.615558624s) [4,0,2] r=1 lpr=57 pi=[43,57)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1224.623535156s@ mbc={}] start_peering_interval up [2,0,4] -> [4,0,2], acting [2,0,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Oct 5 04:05:09 localhost ceph-osd[31524]: osd.0 pg_epoch: 57 pg[7.9( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=41/31 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=15.615465164s) [4,0,2] r=1 lpr=57 pi=[43,57)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1224.623535156s@ mbc={}] state: transitioning to Stray Oct 5 04:05:10 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.1d scrub starts Oct 5 04:05:10 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.1d scrub ok Oct 5 04:05:11 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.a scrub starts Oct 5 04:05:11 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.a scrub ok Oct 5 04:05:11 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.8 scrub starts Oct 5 04:05:11 localhost ceph-osd[32468]: 
log_channel(cluster) log [DBG] : 3.8 scrub ok Oct 5 04:05:11 localhost python3[57810]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:12 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.c scrub starts Oct 5 04:05:12 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.c scrub ok Oct 5 04:05:12 localhost python3[57855]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651511.5291088-93822-147443940102999/source _original_basename=tmp62wp0_36 follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:13 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.e scrub starts Oct 5 04:05:13 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1e scrub starts Oct 5 04:05:13 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.e scrub ok Oct 5 04:05:13 localhost python3[57917]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:05:13 localhost python3[57960]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651513.220987-93969-8032922059804/source _original_basename=tmpn4hh5wu2 follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:14 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.18 scrub starts Oct 5 04:05:14 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.18 scrub ok Oct 5 04:05:14 localhost python3[57990]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Oct 5 04:05:14 localhost python3[58008]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:05:16 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.1b scrub starts Oct 5 04:05:16 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.12 scrub starts Oct 5 04:05:16 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.1b scrub ok Oct 5 04:05:16 localhost ansible-async_wrapper.py[58180]: Invoked with 633915799845 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651515.8895419-94117-280934438805012/AnsiballZ_command.py _ Oct 5 04:05:16 localhost ansible-async_wrapper.py[58183]: Starting module and watcher Oct 5 04:05:16 localhost ansible-async_wrapper.py[58183]: Start watching 58184 (3600) Oct 5 04:05:16 localhost ansible-async_wrapper.py[58184]: Start module (58184) Oct 5 04:05:16 localhost ansible-async_wrapper.py[58180]: Return async_wrapper task started. 
Oct 5 04:05:16 localhost python3[58204]: ansible-ansible.legacy.async_status Invoked with jid=633915799845.58180 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:05:18 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Oct 5 04:05:18 localhost ceph-osd[32468]: osd.3 pg_epoch: 59 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=59 pruub=8.252748489s) [3,4,5] r=0 lpr=59 pi=[44,59)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1221.232910156s@ mbc={}] start_peering_interval up [4,3,5] -> [3,4,5], acting [4,3,5] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:18 localhost ceph-osd[32468]: osd.3 pg_epoch: 59 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=44/45 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=59 pruub=8.252748489s) [3,4,5] r=0 lpr=59 pi=[44,59)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown pruub 1221.232910156s@ mbc={}] state: transitioning to Primary
Oct 5 04:05:18 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Oct 5 04:05:19 localhost ceph-osd[32468]: osd.3 pg_epoch: 60 pg[7.a( v 33'39 (0'0,33'39] local-lis/les=59/60 n=1 ec=41/31 lis/c=44/44 les/c/f=45/45/0 sis=59) [3,4,5] r=0 lpr=59 pi=[44,59)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:05:20 localhost ceph-osd[32468]: osd.3 pg_epoch: 61 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/48/0 sis=61 pruub=11.037599564s) [1,2,0] r=-1 lpr=61 pi=[46,61)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1225.842285156s@ mbc={}] start_peering_interval up [1,2,3] -> [1,2,0], acting [1,2,3] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:20 localhost ceph-osd[32468]: osd.3 pg_epoch: 61 pg[7.b( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=41/31 lis/c=46/46 les/c/f=47/48/0 sis=61 pruub=11.037528992s) [1,2,0] r=-1 lpr=61 pi=[46,61)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1225.842285156s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:20 localhost puppet-user[58202]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 04:05:20 localhost puppet-user[58202]: (file: /etc/puppet/hiera.yaml)
Oct 5 04:05:20 localhost puppet-user[58202]: Warning: Undefined variable '::deploy_config_name';
Oct 5 04:05:20 localhost puppet-user[58202]: (file & line not available)
Oct 5 04:05:20 localhost puppet-user[58202]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 04:05:20 localhost puppet-user[58202]: (file & line not available)
Oct 5 04:05:20 localhost puppet-user[58202]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 5 04:05:20 localhost puppet-user[58202]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 5 04:05:20 localhost puppet-user[58202]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.11 seconds
Oct 5 04:05:20 localhost puppet-user[58202]: Notice: Applied catalog in 0.03 seconds
Oct 5 04:05:20 localhost puppet-user[58202]: Application:
Oct 5 04:05:20 localhost puppet-user[58202]: Initial environment: production
Oct 5 04:05:20 localhost puppet-user[58202]: Converged environment: production
Oct 5 04:05:20 localhost puppet-user[58202]: Run mode: user
Oct 5 04:05:20 localhost puppet-user[58202]: Changes:
Oct 5 04:05:20 localhost puppet-user[58202]: Events:
Oct 5 04:05:20 localhost puppet-user[58202]: Resources:
Oct 5 04:05:20 localhost puppet-user[58202]: Total: 10
Oct 5 04:05:20 localhost puppet-user[58202]: Time:
Oct 5 04:05:20 localhost puppet-user[58202]: Schedule: 0.00
Oct 5 04:05:20 localhost puppet-user[58202]: File: 0.00
Oct 5 04:05:20 localhost puppet-user[58202]: Exec: 0.00
Oct 5 04:05:20 localhost puppet-user[58202]: Augeas: 0.01
Oct 5 04:05:20 localhost puppet-user[58202]: Transaction evaluation: 0.02
Oct 5 04:05:20 localhost puppet-user[58202]: Catalog application: 0.03
Oct 5 04:05:20 localhost puppet-user[58202]: Config retrieval: 0.15
Oct 5 04:05:20 localhost puppet-user[58202]: Last run: 1759651520
Oct 5 04:05:20 localhost puppet-user[58202]: Filebucket: 0.00
Oct 5 04:05:20 localhost puppet-user[58202]: Total: 0.04
Oct 5 04:05:20 localhost puppet-user[58202]: Version:
Oct 5 04:05:20 localhost puppet-user[58202]: Config: 1759651520
Oct 5 04:05:20 localhost puppet-user[58202]: Puppet: 7.10.0
Oct 5 04:05:20 localhost ansible-async_wrapper.py[58184]: Module complete (58184)
Oct 5 04:05:21 localhost ceph-osd[31524]: osd.0 pg_epoch: 61 pg[7.b( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=46/46 les/c/f=47/48/0 sis=61) [1,2,0] r=2 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Oct 5 04:05:21 localhost ansible-async_wrapper.py[58183]: Done in kid B.
Oct 5 04:05:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.
Oct 5 04:05:21 localhost systemd[1]: tmp-crun.FO4Wbc.mount: Deactivated successfully.
Oct 5 04:05:21 localhost podman[58315]: 2025-10-05 08:05:21.921926076 +0000 UTC m=+0.091574958 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.9, release=1)
Oct 5 04:05:22 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 5 04:05:22 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 5 04:05:22 localhost ceph-osd[31524]: osd.0 pg_epoch: 63 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=41/31 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=10.614815712s) [2,1,3] r=-1 lpr=63 pi=[48,63)/1 luod=0'0 crt=33'39 lcod 0'0 mlcod 0'0 active pruub 1231.884033203s@ mbc={}] start_peering_interval up [1,5,0] -> [2,1,3], acting [1,5,0] -> [2,1,3], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:22 localhost ceph-osd[31524]: osd.0 pg_epoch: 63 pg[7.c( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=41/31 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=10.614757538s) [2,1,3] r=-1 lpr=63 pi=[48,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1231.884033203s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:22 localhost podman[58315]: 2025-10-05 08:05:22.144946478 +0000 UTC m=+0.314595320 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.9, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team)
Oct 5 04:05:22 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully.
Oct 5 04:05:22 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 5 04:05:22 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 5 04:05:23 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 5 04:05:23 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 5 04:05:23 localhost ceph-osd[32468]: osd.3 pg_epoch: 63 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=48/48 les/c/f=49/49/0 sis=63) [2,1,3] r=2 lpr=63 pi=[48,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Oct 5 04:05:24 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 5 04:05:24 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 5 04:05:24 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 5 04:05:24 localhost ceph-osd[32468]: osd.3 pg_epoch: 65 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=41/31 lis/c=50/50 les/c/f=51/51/0 sis=65 pruub=10.389051437s) [2,4,0] r=-1 lpr=65 pi=[50,65)/1 crt=33'39 mlcod 0'0 active pruub 1229.520751953s@ mbc={255={}}] start_peering_interval up [3,4,2] -> [2,4,0], acting [3,4,2] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:24 localhost ceph-osd[32468]: osd.3 pg_epoch: 65 pg[7.d( v 33'39 (0'0,33'39] local-lis/les=50/51 n=1 ec=41/31 lis/c=50/50 les/c/f=51/51/0 sis=65 pruub=10.388937950s) [2,4,0] r=-1 lpr=65 pi=[50,65)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1229.520751953s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:24 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 5 04:05:26 localhost ceph-osd[31524]: osd.0 pg_epoch: 65 pg[7.d( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=50/50 les/c/f=51/51/0 sis=65) [2,4,0] r=2 lpr=65 pi=[50,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Oct 5 04:05:26 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 5 04:05:26 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 5 04:05:26 localhost ceph-osd[32468]: osd.3 pg_epoch: 67 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=41/31 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.999356270s) [1,0,5] r=-1 lpr=67 pi=[52,67)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1232.165771484s@ mbc={}] start_peering_interval up [1,2,3] -> [1,0,5], acting [1,2,3] -> [1,0,5], acting_primary 1 -> 1, up_primary 1 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:26 localhost ceph-osd[32468]: osd.3 pg_epoch: 67 pg[7.e( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=41/31 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.999252319s) [1,0,5] r=-1 lpr=67 pi=[52,67)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1232.165771484s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:27 localhost python3[58488]: ansible-ansible.legacy.async_status Invoked with jid=633915799845.58180 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 68 crush map has features 432629239337189376, adjusting msgr requires for clients
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 68 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 68 crush map has features 3314933000854323200, adjusting msgr requires for osds
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 pg_epoch: 68 pg[4.f( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.020506859s) [4,3,5] r=1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 active pruub 1236.197143555s@ mbc={}] start_peering_interval up [1,3,5] -> [4,3,5], acting [1,3,5] -> [4,3,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 pg_epoch: 68 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=37/37 les/c/f=38/38/0 sis=68 pruub=9.094978333s) [0,2,1] r=-1 lpr=68 pi=[37,68)/1 crt=0'0 mlcod 0'0 active pruub 1231.271606445s@ mbc={}] start_peering_interval up [3,2,1] -> [0,2,1], acting [3,2,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 pg_epoch: 68 pg[4.f( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.020453453s) [4,3,5] r=1 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1236.197143555s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:27 localhost ceph-osd[32468]: osd.3 pg_epoch: 68 pg[3.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=37/37 les/c/f=38/38/0 sis=68 pruub=9.094853401s) [0,2,1] r=-1 lpr=68 pi=[37,68)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1231.271606445s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 68 crush map has features 432629239337189376, adjusting msgr requires for clients
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 68 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 68 crush map has features 3314933000854323200, adjusting msgr requires for osds
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 pg_epoch: 68 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=37/37 les/c/f=38/38/0 sis=68) [0,2,1] r=0 lpr=68 pi=[37,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 pg_epoch: 68 pg[4.6( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.035918236s) [0,4,2] r=0 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 active pruub 1240.626708984s@ mbc={}] start_peering_interval up [0,1,2] -> [0,4,2], acting [0,1,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 pg_epoch: 68 pg[4.6( empty local-lis/les=43/44 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68 pruub=14.035918236s) [0,4,2] r=0 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 unknown pruub 1240.626708984s@ mbc={}] state: transitioning to Primary
Oct 5 04:05:27 localhost ceph-osd[31524]: osd.0 pg_epoch: 67 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=52/52 les/c/f=53/53/0 sis=67) [1,0,5] r=1 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Oct 5 04:05:27 localhost python3[58504]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:05:27 localhost python3[58520]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:05:28 localhost ceph-osd[31524]: osd.0 pg_epoch: 69 pg[3.0( empty local-lis/les=68/69 n=0 ec=20/20 lis/c=37/37 les/c/f=38/38/0 sis=68) [0,2,1] r=0 lpr=68 pi=[37,68)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:05:28 localhost ceph-osd[31524]: osd.0 pg_epoch: 69 pg[4.6( empty local-lis/les=68/69 n=0 ec=37/22 lis/c=43/43 les/c/f=44/44/0 sis=68) [0,4,2] r=0 lpr=68 pi=[43,68)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Oct 5 04:05:28 localhost python3[58570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:28 localhost python3[58588]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpbmtbe9iz recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:05:29 localhost python3[58618]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:30 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct 5 04:05:30 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct 5 04:05:30 localhost python3[58722]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 5 04:05:31 localhost python3[58741]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:32 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 5 04:05:32 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 5 04:05:32 localhost python3[58773]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:05:32 localhost python3[58823]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:33 localhost python3[58841]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:33 localhost python3[58903]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:33 localhost python3[58921]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:34 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 5 04:05:34 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 5 04:05:34 localhost python3[58983]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:34 localhost ceph-osd[31524]: osd.0 pg_epoch: 70 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=41/31 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=13.031370163s) [1,3,5] r=-1 lpr=70 pi=[54,70)/1 luod=0'0 crt=33'39 mlcod 0'0 active pruub 1246.597290039s@ mbc={}] start_peering_interval up [2,0,1] -> [1,3,5], acting [2,0,1] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 5 04:05:34 localhost ceph-osd[31524]: osd.0 pg_epoch: 70 pg[7.f( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=41/31 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=13.031263351s) [1,3,5] r=-1 lpr=70 pi=[54,70)/1 crt=33'39 mlcod 0'0 unknown NOTIFY pruub 1246.597290039s@ mbc={}] state: transitioning to Stray
Oct 5 04:05:34 localhost python3[59001]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:35 localhost python3[59063]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:35 localhost python3[59081]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:35 localhost ceph-osd[32468]: osd.3 pg_epoch: 70 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=54/54 les/c/f=55/55/0 sis=70) [1,3,5] r=1 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Oct 5 04:05:35 localhost python3[59111]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:05:35 localhost systemd[1]: Reloading.
Oct 5 04:05:36 localhost systemd-rc-local-generator[59133]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:05:36 localhost systemd-sysv-generator[59136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:05:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:05:36 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 5 04:05:36 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 5 04:05:36 localhost python3[59196]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:37 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 5 04:05:37 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 5 04:05:37 localhost python3[59214]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:37 localhost python3[59276]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:05:37 localhost python3[59294]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:05:38 localhost python3[59324]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:05:38 localhost systemd[1]: Reloading.
Oct 5 04:05:38 localhost systemd-sysv-generator[59353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:05:38 localhost systemd-rc-local-generator[59348]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:05:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:05:38 localhost systemd[1]: Starting Create netns directory...
Oct 5 04:05:38 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 04:05:38 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 04:05:38 localhost systemd[1]: Finished Create netns directory.
Oct 5 04:05:39 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Oct 5 04:05:39 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Oct 5 04:05:39 localhost python3[59382]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 5 04:05:41 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 5 04:05:41 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 5 04:05:41 localhost python3[59441]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 04:05:41 localhost podman[59522]: 2025-10-05 08:05:41.586309955 +0000 UTC m=+0.095940857 container create c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1, maintainer=OpenStack TripleO Team)
Oct 5 04:05:41 localhost podman[59528]: 2025-10-05 08:05:41.603300098 +0000 UTC m=+0.100211304 container create 05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, vcs-type=git, container_name=nova_virtqemud_init_logs, config_id=tripleo_step2, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, architecture=x86_64, release=2, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible)
Oct 5 04:05:41 localhost podman[59522]: 2025-10-05 08:05:41.53481356 +0000 UTC m=+0.044444492 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 5 04:05:41 localhost systemd[1]: Started libpod-conmon-05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7.scope.
Oct 5 04:05:41 localhost podman[59528]: 2025-10-05 08:05:41.546032866 +0000 UTC m=+0.042944072 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 04:05:41 localhost systemd[1]: Started libpod-conmon-c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db.scope.
Oct 5 04:05:41 localhost systemd[1]: Started libcrun container.
Oct 5 04:05:41 localhost systemd[1]: Started libcrun container.
Oct 5 04:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14165343956b68f6adce0a282bc9a68a91e1d66b2adbe87d958d61d99ad6d3d8/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 04:05:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56fd012cd53db160963f1dee06cf4da8c3422d34817ca588642ef798e92735f5/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Oct 5 04:05:41 localhost podman[59522]: 2025-10-05 08:05:41.678216882 +0000 UTC m=+0.187847784 container init c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, release=1, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_id=tripleo_step2, distribution-scope=public)
Oct 5 04:05:41 localhost podman[59528]: 2025-10-05 08:05:41.681503731 +0000 UTC m=+0.178414937 container init 05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, container_name=nova_virtqemud_init_logs, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team,
description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, tcib_managed=true) Oct 5 04:05:41 localhost podman[59522]: 2025-10-05 08:05:41.689983402 +0000 UTC m=+0.199614304 container start c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_compute_init_log, vendor=Red Hat, Inc., config_id=tripleo_step2, description=Red 
Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:05:41 localhost python3[59441]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Oct 5 04:05:41 localhost systemd[1]: libpod-c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db.scope: Deactivated successfully. Oct 5 04:05:41 localhost systemd[1]: libpod-05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7.scope: Deactivated successfully. 
Oct 5 04:05:41 localhost podman[59528]: 2025-10-05 08:05:41.741515947 +0000 UTC m=+0.238427153 container start 05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, config_id=tripleo_step2, build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-type=git, container_name=nova_virtqemud_init_logs, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:05:41 localhost python3[59441]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step2 --label 
container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Oct 5 04:05:41 localhost podman[59560]: 2025-10-05 08:05:41.768060792 +0000 UTC m=+0.056972686 container died c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vcs-type=git, build-date=2025-07-21T14:48:37, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute) Oct 5 04:05:41 localhost podman[59592]: 2025-10-05 08:05:41.806177251 +0000 UTC m=+0.041600386 container died 05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_virtqemud_init_logs, release=2, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12) Oct 5 04:05:41 localhost podman[59562]: 2025-10-05 08:05:41.860279607 +0000 UTC m=+0.149347985 container cleanup c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute_init_log, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, release=1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 04:05:41 localhost systemd[1]: libpod-conmon-c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db.scope: Deactivated successfully. 
Oct 5 04:05:41 localhost podman[59563]: 2025-10-05 08:05:41.897318696 +0000 UTC m=+0.180036130 container cleanup 05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, maintainer=OpenStack TripleO Team, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, batch=17.1_20250721.1, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:05:41 localhost systemd[1]: libpod-conmon-05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7.scope: Deactivated successfully. 
Oct 5 04:05:42 localhost podman[59713]: 2025-10-05 08:05:42.202938981 +0000 UTC m=+0.073772863 container create a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20250721.1, config_id=tripleo_step2, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, build-date=2025-07-21T16:28:53, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=create_haproxy_wrapper, vcs-type=git) Oct 5 04:05:42 localhost podman[59719]: 2025-10-05 08:05:42.235492479 +0000 UTC m=+0.092342950 container create adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, release=2, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, config_id=tripleo_step2, build-date=2025-07-21T14:56:59, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=create_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container) Oct 5 04:05:42 localhost systemd[1]: Started libpod-conmon-a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043.scope. Oct 5 04:05:42 localhost systemd[1]: Started libcrun container. Oct 5 04:05:42 localhost systemd[1]: Started libpod-conmon-adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956.scope. 
Oct 5 04:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88431d359b42496c7ed4ff6b33f06da63b22b9645d8b9affaed743b1113f6ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 04:05:42 localhost podman[59713]: 2025-10-05 08:05:42.166489037 +0000 UTC m=+0.037322909 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:05:42 localhost podman[59713]: 2025-10-05 08:05:42.274854712 +0000 UTC m=+0.145688604 container init a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=create_haproxy_wrapper, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:05:42 localhost systemd[1]: Started libcrun container. Oct 5 04:05:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5030a18da58589cb9376f09d127cf9b62366340dd5dbd67fa5abee2369265346/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Oct 5 04:05:42 localhost podman[59713]: 2025-10-05 08:05:42.283005883 +0000 UTC m=+0.153839765 container start a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=create_haproxy_wrapper) Oct 5 04:05:42 localhost podman[59713]: 2025-10-05 08:05:42.283259871 +0000 UTC m=+0.154093803 container attach a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, container_name=create_haproxy_wrapper, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:05:42 
localhost podman[59719]: 2025-10-05 08:05:42.288995338 +0000 UTC m=+0.145845799 container init adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=create_virtlogd_wrapper, build-date=2025-07-21T14:56:59, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, release=2, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, version=17.1.9) Oct 5 04:05:42 localhost podman[59719]: 2025-10-05 08:05:42.192894386 +0000 UTC m=+0.049744847 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:05:42 localhost podman[59719]: 2025-10-05 08:05:42.298719992 +0000 UTC m=+0.155570463 container start adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, distribution-scope=public, container_name=create_virtlogd_wrapper, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59) Oct 5 04:05:42 localhost podman[59719]: 2025-10-05 08:05:42.299292468 +0000 UTC m=+0.156142959 container attach adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 
'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, release=2, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay-56fd012cd53db160963f1dee06cf4da8c3422d34817ca588642ef798e92735f5-merged.mount: Deactivated successfully. Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05325300cc1521576e764136ab8c587859fcccc8e6a950aa5b7de61479db01b7-userdata-shm.mount: Deactivated successfully. Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay-14165343956b68f6adce0a282bc9a68a91e1d66b2adbe87d958d61d99ad6d3d8-merged.mount: Deactivated successfully. 
Oct 5 04:05:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c757d7eeb9c2714ce633bd0af55a43e9c9ad0998a7a2f9a41aa71ede249996db-userdata-shm.mount: Deactivated successfully. Oct 5 04:05:43 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.d scrub starts Oct 5 04:05:43 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.1d scrub starts Oct 5 04:05:43 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.d scrub ok Oct 5 04:05:43 localhost ovs-vsctl[59841]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Oct 5 04:05:44 localhost systemd[1]: libpod-adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956.scope: Deactivated successfully. Oct 5 04:05:44 localhost systemd[1]: libpod-adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956.scope: Consumed 2.108s CPU time. Oct 5 04:05:44 localhost podman[59719]: 2025-10-05 08:05:44.419156127 +0000 UTC m=+2.276006598 container died adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=create_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, release=2, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 04:05:44 localhost systemd[1]: tmp-crun.jEOzGm.mount: Deactivated successfully. Oct 5 04:05:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956-userdata-shm.mount: Deactivated successfully. 
Oct 5 04:05:44 localhost podman[59966]: 2025-10-05 08:05:44.484111788 +0000 UTC m=+0.053872950 container cleanup adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, container_name=create_virtlogd_wrapper, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, config_id=tripleo_step2, version=17.1.9) Oct 5 04:05:44 localhost systemd[1]: libpod-conmon-adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956.scope: Deactivated successfully. Oct 5 04:05:44 localhost python3[59441]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Oct 5 04:05:45 localhost systemd[1]: libpod-a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043.scope: Deactivated successfully. Oct 5 04:05:45 localhost systemd[1]: libpod-a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043.scope: Consumed 2.044s CPU time. 
Oct 5 04:05:45 localhost podman[59713]: 2025-10-05 08:05:45.071017654 +0000 UTC m=+2.941851646 container died a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, 
container_name=create_haproxy_wrapper, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 5 04:05:45 localhost podman[60004]: 2025-10-05 08:05:45.127878104 +0000 UTC m=+0.050997011 container cleanup a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1) Oct 5 04:05:45 localhost systemd[1]: libpod-conmon-a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043.scope: Deactivated successfully. 
Oct 5 04:05:45 localhost python3[59441]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Oct 5 04:05:45 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.9 scrub starts Oct 5 04:05:45 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.9 scrub ok Oct 5 04:05:45 localhost systemd[1]: var-lib-containers-storage-overlay-5030a18da58589cb9376f09d127cf9b62366340dd5dbd67fa5abee2369265346-merged.mount: Deactivated successfully. Oct 5 04:05:45 localhost systemd[1]: var-lib-containers-storage-overlay-a88431d359b42496c7ed4ff6b33f06da63b22b9645d8b9affaed743b1113f6ea-merged.mount: Deactivated successfully. Oct 5 04:05:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7c6c2b31a5efb1ec3db55865a903dc948611e47e525c7817d33bad2be1b3043-userdata-shm.mount: Deactivated successfully. 
Oct 5 04:05:45 localhost python3[60058]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:46 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.1c scrub starts Oct 5 04:05:46 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.1c scrub ok Oct 5 04:05:47 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts Oct 5 04:05:47 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok Oct 5 04:05:47 localhost python3[60179]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005471152 step=2 update_config_hash_only=False Oct 5 04:05:47 localhost python3[60195]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:05:48 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 6.12 scrub starts Oct 5 04:05:48 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 6.12 scrub ok Oct 5 04:05:48 localhost python3[60211]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 04:05:52 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:05:52 localhost podman[60212]: 2025-10-05 08:05:52.921619321 +0000 UTC m=+0.086732196 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, config_id=tripleo_step1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, release=1, container_name=metrics_qdr, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:05:53 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.0 scrub starts Oct 5 04:05:53 localhost podman[60212]: 2025-10-05 08:05:53.187355378 +0000 UTC m=+0.352468303 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:05:53 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:05:53 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 3.0 scrub ok Oct 5 04:05:55 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.17 scrub starts Oct 5 04:05:55 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.17 scrub ok Oct 5 04:05:56 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts Oct 5 04:05:56 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok Oct 5 04:05:57 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts Oct 5 04:05:57 localhost ceph-osd[31524]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok Oct 5 04:05:59 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.15 scrub starts Oct 5 04:05:59 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.15 scrub ok Oct 5 04:06:00 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1b scrub starts Oct 5 04:06:00 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1b scrub ok Oct 5 04:06:01 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.19 scrub starts Oct 5 04:06:01 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.19 scrub ok Oct 5 04:06:04 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.1f scrub starts Oct 5 04:06:05 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.1f scrub ok Oct 5 04:06:05 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.1 scrub starts Oct 5 04:06:05 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.1 scrub ok Oct 5 04:06:13 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.14 scrub starts Oct 5 04:06:14 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 4.14 scrub ok Oct 5 04:06:14 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1c deep-scrub starts Oct 5 04:06:14 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1c deep-scrub ok Oct 5 04:06:15 localhost ceph-osd[32468]: log_channel(cluster) 
log [DBG] : 7.5 scrub starts Oct 5 04:06:16 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 7.5 scrub ok Oct 5 04:06:18 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 7.a scrub starts Oct 5 04:06:18 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 7.a scrub ok Oct 5 04:06:21 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.9 scrub starts Oct 5 04:06:22 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.9 scrub ok Oct 5 04:06:22 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1d scrub starts Oct 5 04:06:22 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 3.1d scrub ok Oct 5 04:06:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:06:23 localhost podman[60242]: 2025-10-05 08:06:23.890415394 +0000 UTC m=+0.065983757 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Oct 5 04:06:24 localhost podman[60242]: 2025-10-05 08:06:24.071830682 +0000 UTC m=+0.247398985 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, version=17.1.9, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, 
distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:06:24 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:06:24 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1e scrub starts Oct 5 04:06:24 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 6.1e scrub ok Oct 5 04:06:25 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.12 scrub starts Oct 5 04:06:25 localhost ceph-osd[32468]: log_channel(cluster) log [DBG] : 5.12 scrub ok Oct 5 04:06:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:06:54 localhost systemd[1]: tmp-crun.GG9kDr.mount: Deactivated successfully. Oct 5 04:06:54 localhost podman[60346]: 2025-10-05 08:06:54.909470898 +0000 UTC m=+0.080499881 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Oct 5 04:06:55 localhost podman[60346]: 2025-10-05 08:06:55.099063502 +0000 UTC m=+0.270092385 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:06:55 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:07:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:07:25 localhost systemd[1]: tmp-crun.UNGyCG.mount: Deactivated successfully. 
Oct 5 04:07:25 localhost podman[60391]: 2025-10-05 08:07:25.920134264 +0000 UTC m=+0.089184661 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO 
Team, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:07:26 localhost podman[60391]: 2025-10-05 08:07:26.107318192 +0000 UTC m=+0.276368509 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, architecture=x86_64, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 5 04:07:26 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:07:26 localhost systemd[1]: tmp-crun.1pi1WY.mount: Deactivated successfully. Oct 5 04:07:26 localhost podman[60507]: 2025-10-05 08:07:26.691831225 +0000 UTC m=+0.090577301 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, RELEASE=main, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True) Oct 5 04:07:26 localhost podman[60507]: 2025-10-05 08:07:26.790917501 +0000 UTC m=+0.189663517 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, release=553, name=rhceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True) Oct 5 04:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:07:56 localhost podman[60650]: 2025-10-05 08:07:56.911578308 +0000 UTC m=+0.076947791 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.33.12, container_name=metrics_qdr, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=) Oct 5 04:07:57 localhost podman[60650]: 2025-10-05 08:07:57.108202939 +0000 UTC m=+0.273572452 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, vcs-type=git, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:07:57 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:08:27 localhost systemd[1]: tmp-crun.KzS0HS.mount: Deactivated successfully. 
Oct 5 04:08:27 localhost podman[60677]: 2025-10-05 08:08:27.91530696 +0000 UTC m=+0.080991781 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, 
managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:07:59, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:08:28 localhost podman[60677]: 2025-10-05 08:08:28.104040942 +0000 UTC m=+0.269725703 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:07:59) Oct 5 04:08:28 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:08:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:08:58 localhost podman[60782]: 2025-10-05 08:08:58.906995562 +0000 UTC m=+0.078739071 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, 
version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:08:59 localhost podman[60782]: 2025-10-05 08:08:59.099672033 +0000 UTC m=+0.271415512 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git) Oct 5 04:08:59 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:09:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:09:29 localhost podman[60809]: 2025-10-05 08:09:29.898666355 +0000 UTC m=+0.071913342 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:09:30 localhost podman[60809]: 2025-10-05 08:09:30.090194754 +0000 UTC m=+0.263441731 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr) Oct 5 04:09:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:10:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:10:00 localhost systemd[1]: tmp-crun.esH7gm.mount: Deactivated successfully. 
Oct 5 04:10:00 localhost podman[60914]: 2025-10-05 08:10:00.920897314 +0000 UTC m=+0.088638709 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, tcib_managed=true, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, batch=17.1_20250721.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible) Oct 5 04:10:01 localhost podman[60914]: 2025-10-05 08:10:01.112424794 +0000 UTC m=+0.280166129 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1) Oct 5 04:10:01 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:10:15 localhost python3[60989]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:15 localhost python3[61034]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651815.2119215-100174-160706622531699/source _original_basename=tmp4mulps6x follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:16 localhost python3[61064]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:18 localhost ansible-async_wrapper.py[61236]: Invoked with 461400744940 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651818.2210789-100491-210111148204011/AnsiballZ_command.py _ 
Oct 5 04:10:18 localhost ansible-async_wrapper.py[61239]: Starting module and watcher Oct 5 04:10:18 localhost ansible-async_wrapper.py[61239]: Start watching 61240 (3600) Oct 5 04:10:18 localhost ansible-async_wrapper.py[61240]: Start module (61240) Oct 5 04:10:18 localhost ansible-async_wrapper.py[61236]: Return async_wrapper task started. Oct 5 04:10:19 localhost python3[61257]: ansible-ansible.legacy.async_status Invoked with jid=461400744940.61236 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:10:22 localhost puppet-user[61260]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Oct 5 04:10:22 localhost puppet-user[61260]: (file: /etc/puppet/hiera.yaml) Oct 5 04:10:22 localhost puppet-user[61260]: Warning: Undefined variable '::deploy_config_name'; Oct 5 04:10:22 localhost puppet-user[61260]: (file & line not available) Oct 5 04:10:22 localhost puppet-user[61260]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 04:10:22 localhost puppet-user[61260]: (file & line not available) Oct 5 04:10:22 localhost puppet-user[61260]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 5 04:10:22 localhost puppet-user[61260]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 5 04:10:22 localhost puppet-user[61260]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.16 seconds Oct 5 04:10:22 localhost puppet-user[61260]: Notice: Applied catalog in 0.04 seconds Oct 5 04:10:22 localhost puppet-user[61260]: Application: Oct 5 04:10:22 localhost puppet-user[61260]: Initial environment: production Oct 5 04:10:22 localhost puppet-user[61260]: Converged environment: production Oct 5 04:10:22 localhost puppet-user[61260]: Run mode: user Oct 5 04:10:22 localhost puppet-user[61260]: Changes: Oct 5 04:10:22 localhost puppet-user[61260]: Events: Oct 5 04:10:22 localhost puppet-user[61260]: Resources: Oct 5 04:10:22 localhost puppet-user[61260]: Total: 10 Oct 5 04:10:22 localhost puppet-user[61260]: Time: Oct 5 04:10:22 localhost puppet-user[61260]: Schedule: 0.00 Oct 5 04:10:22 localhost puppet-user[61260]: File: 0.00 Oct 5 04:10:22 localhost puppet-user[61260]: Exec: 0.01 Oct 5 04:10:22 localhost puppet-user[61260]: Augeas: 0.01 Oct 5 04:10:22 localhost puppet-user[61260]: Transaction evaluation: 0.03 Oct 5 04:10:22 localhost puppet-user[61260]: Catalog application: 0.04 Oct 5 04:10:22 localhost puppet-user[61260]: Config retrieval: 0.19 Oct 5 04:10:22 localhost puppet-user[61260]: Last run: 1759651822 Oct 5 04:10:22 localhost puppet-user[61260]: Filebucket: 0.00 Oct 5 04:10:22 localhost puppet-user[61260]: Total: 0.05 Oct 5 04:10:22 localhost puppet-user[61260]: Version: Oct 5 04:10:22 localhost puppet-user[61260]: Config: 1759651822 Oct 5 04:10:22 localhost puppet-user[61260]: Puppet: 7.10.0 Oct 5 04:10:22 localhost ansible-async_wrapper.py[61240]: Module complete (61240) Oct 5 04:10:23 localhost ansible-async_wrapper.py[61239]: Done in kid B. 
Oct 5 04:10:29 localhost python3[61387]: ansible-ansible.legacy.async_status Invoked with jid=461400744940.61236 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:10:30 localhost python3[61403]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:10:30 localhost python3[61419]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:10:31 localhost podman[61469]: 2025-10-05 08:10:31.306805817 +0000 UTC m=+0.058148665 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, config_id=tripleo_step1, tcib_managed=true, build-date=2025-07-21T13:07:59, 
vendor=Red Hat, Inc., container_name=metrics_qdr, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:10:31 localhost python3[61470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:31 localhost podman[61469]: 2025-10-05 08:10:31.515324302 +0000 UTC m=+0.266667100 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:10:31 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:10:31 localhost python3[61518]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpdt1l_mti recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:10:32 localhost python3[61589]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:33 localhost python3[61725]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 5 04:10:34 localhost python3[61744]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 04:10:35 localhost python3[61776]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:36 localhost python3[61826]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:36 localhost python3[61844]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:37 localhost python3[61906]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:37 localhost python3[61924]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:38 localhost python3[61986]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True 
Oct 5 04:10:38 localhost python3[62004]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:38 localhost python3[62066]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:39 localhost python3[62084]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:39 localhost python3[62114]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:10:39 localhost systemd[1]: Reloading. Oct 5 04:10:39 localhost systemd-rc-local-generator[62141]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:10:39 localhost systemd-sysv-generator[62145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:10:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:10:40 localhost python3[62200]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:40 localhost python3[62218]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:41 localhost python3[62280]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:10:41 localhost python3[62298]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:42 localhost python3[62328]: ansible-systemd Invoked with name=netns-placeholder state=started 
enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:10:42 localhost systemd[1]: Reloading. Oct 5 04:10:42 localhost systemd-rc-local-generator[62356]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:10:42 localhost systemd-sysv-generator[62360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:10:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:10:42 localhost systemd[1]: Starting Create netns directory... Oct 5 04:10:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 04:10:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 04:10:42 localhost systemd[1]: Finished Create netns directory. 
Oct 5 04:10:42 localhost python3[62386]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 5 04:10:44 localhost python3[62444]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 5 04:10:44 localhost podman[62580]: 2025-10-05 08:10:44.814989553 +0000 UTC m=+0.062863453 container create 531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, 
batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:10:44 localhost podman[62603]: 2025-10-05 08:10:44.845541521 +0000 UTC m=+0.078681968 container create 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.33.12, config_id=tripleo_step3, release=2) Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8.scope. Oct 5 04:10:44 localhost podman[62631]: 2025-10-05 08:10:44.86264836 +0000 UTC m=+0.074380429 container create 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-rsyslog, version=17.1.9, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-07-21T12:58:40, io.openshift.expose-services=, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.scope. Oct 5 04:10:44 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:44 localhost podman[62633]: 2025-10-05 08:10:44.876218182 +0000 UTC m=+0.080334463 container create 1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, release=1, container_name=nova_statedir_owner, distribution-scope=public) Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/1f399fda81bbe6240bca25723d60396a8f25e34829105df5d1e8b91fbce43961/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost systemd[1]: Started libcrun container. Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2231e879ead43b6a2e73a2aad2fe770af49563937e9adad8ccf7c304d6ac6ec/merged/scripts supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost podman[62580]: 2025-10-05 08:10:44.881872577 +0000 UTC m=+0.129746477 container init 531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step3, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T15:29:47, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) Oct 5 04:10:44 localhost podman[62580]: 2025-10-05 08:10:44.782206445 +0000 UTC m=+0.030080335 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2231e879ead43b6a2e73a2aad2fe770af49563937e9adad8ccf7c304d6ac6ec/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope. Oct 5 04:10:44 localhost podman[62580]: 2025-10-05 08:10:44.890171784 +0000 UTC m=+0.138045684 container start 531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, config_id=tripleo_step3, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 
'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:10:44 localhost systemd[1]: Started libcrun container. Oct 5 04:10:44 localhost systemd[1]: libpod-531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8.scope: Deactivated successfully. Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost podman[62603]: 2025-10-05 08:10:44.798151412 +0000 UTC m=+0.031291859 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 04:10:44 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Oct 5 04:10:44 localhost podman[62631]: 2025-10-05 
08:10:44.903491649 +0000 UTC m=+0.115223728 container init 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_id=tripleo_step3, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=rsyslog, version=17.1.9) Oct 5 04:10:44 localhost podman[62631]: 2025-10-05 08:10:44.908862907 +0000 UTC m=+0.120594986 container start 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, container_name=rsyslog, distribution-scope=public) Oct 5 04:10:44 localhost podman[62631]: 2025-10-05 08:10:44.831665751 +0000 UTC m=+0.043397820 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc.scope. Oct 5 04:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:10:44 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=c451d5e94e858df36b636f2835a46cda --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Oct 5 04:10:44 localhost podman[62603]: 2025-10-05 08:10:44.915647262 +0000 UTC m=+0.148787679 container init 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, version=17.1.9, config_id=tripleo_step3, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-collectd-container) Oct 5 04:10:44 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e4045be5c881139fd829799dfaed464fba2b2ef703554c7a184a66e7396587/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e4045be5c881139fd829799dfaed464fba2b2ef703554c7a184a66e7396587/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e4045be5c881139fd829799dfaed464fba2b2ef703554c7a184a66e7396587/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:10:44 localhost podman[62633]: 2025-10-05 08:10:44.832668958 +0000 UTC m=+0.036785259 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:10:44 localhost podman[62603]: 2025-10-05 08:10:44.934582181 +0000 UTC m=+0.167722598 container start 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step3, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:10:44 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Oct 5 04:10:44 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Oct 5 04:10:44 localhost podman[62648]: 2025-10-05 08:10:44.941281805 +0000 UTC m=+0.133002206 container create 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., release=2, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 
nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, container_name=nova_virtlogd_wrapper) Oct 5 04:10:44 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 04:10:44 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 5 04:10:44 localhost systemd[1]: Started libpod-conmon-083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd.scope. Oct 5 04:10:44 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 04:10:44 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 04:10:44 localhost systemd[1]: libpod-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. Oct 5 04:10:44 localhost podman[62692]: 2025-10-05 08:10:44.983396849 +0000 UTC m=+0.068089237 container died 531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc.) Oct 5 04:10:44 localhost podman[62648]: 2025-10-05 08:10:44.897253268 +0000 UTC m=+0.088973699 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost systemd[1]: Started libcrun container. Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost podman[62648]: 2025-10-05 08:10:45.012728643 +0000 UTC m=+0.204449044 container init 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true, build-date=2025-07-21T14:56:59, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtlogd_wrapper, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:10:45 localhost podman[62648]: 2025-10-05 08:10:45.019639862 +0000 UTC m=+0.211360253 container start 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., release=2, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, vcs-type=git, architecture=x86_64, 
container_name=nova_virtlogd_wrapper, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team) Oct 5 04:10:45 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume 
/var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost podman[62764]: 2025-10-05 08:10:45.027431906 +0000 UTC m=+0.031687349 container died 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, release=1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
config_id=tripleo_step3, build-date=2025-07-21T12:58:40, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Oct 5 04:10:45 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 5 04:10:45 localhost podman[62692]: 2025-10-05 08:10:45.061026487 +0000 UTC m=+0.145718875 container cleanup 531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step3, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47) Oct 5 04:10:45 localhost systemd[1]: libpod-conmon-531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8.scope: Deactivated successfully. Oct 5 04:10:45 localhost systemd[62747]: Queued start job for default target Main User Target. Oct 5 04:10:45 localhost systemd[62747]: Created slice User Application Slice. Oct 5 04:10:45 localhost systemd[62747]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 04:10:45 localhost systemd[62747]: Started Daily Cleanup of User's Temporary Directories. Oct 5 04:10:45 localhost systemd[62747]: Reached target Paths. Oct 5 04:10:45 localhost systemd[62747]: Reached target Timers. Oct 5 04:10:45 localhost systemd[62747]: Starting D-Bus User Message Bus Socket... Oct 5 04:10:45 localhost systemd[62747]: Starting Create User's Volatile Files and Directories... 
Oct 5 04:10:45 localhost podman[62724]: 2025-10-05 08:10:45.109099734 +0000 UTC m=+0.170964577 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:10:45 localhost systemd[62747]: Finished Create User's Volatile Files and Directories. Oct 5 04:10:45 localhost podman[62724]: 2025-10-05 08:10:45.117665279 +0000 UTC m=+0.179530152 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, release=2, config_id=tripleo_step3) Oct 5 04:10:45 localhost podman[62724]: unhealthy Oct 5 04:10:45 localhost systemd[62747]: Listening on D-Bus User Message Bus Socket. Oct 5 04:10:45 localhost systemd[62747]: Reached target Sockets. Oct 5 04:10:45 localhost systemd[62747]: Reached target Basic System. Oct 5 04:10:45 localhost systemd[62747]: Reached target Main User Target. Oct 5 04:10:45 localhost systemd[62747]: Startup finished in 115ms. Oct 5 04:10:45 localhost systemd[1]: Started User Manager for UID 0. Oct 5 04:10:45 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:10:45 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Failed with result 'exit-code'. 
Oct 5 04:10:45 localhost podman[62633]: 2025-10-05 08:10:45.13523451 +0000 UTC m=+0.339350811 container init 1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:10:45 localhost systemd[1]: Started Session c1 of User root. Oct 5 04:10:45 localhost systemd[1]: Started Session c2 of User root. 
Oct 5 04:10:45 localhost podman[62633]: 2025-10-05 08:10:45.148033471 +0000 UTC m=+0.352149752 container start 1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_statedir_owner, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible) Oct 5 04:10:45 localhost podman[62633]: 2025-10-05 08:10:45.148239307 +0000 UTC m=+0.352355618 container attach 
1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner) Oct 5 04:10:45 localhost systemd[1]: libpod-1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc.scope: Deactivated successfully. 
Oct 5 04:10:45 localhost podman[62633]: 2025-10-05 08:10:45.201061955 +0000 UTC m=+0.405178266 container died 1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=nova_statedir_owner, release=1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step3) Oct 5 04:10:45 localhost podman[62764]: 2025-10-05 08:10:45.210558854 +0000 UTC m=+0.214814257 container cleanup 
6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3) Oct 5 04:10:45 localhost systemd[1]: libpod-conmon-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: session-c1.scope: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: session-c2.scope: Deactivated successfully. Oct 5 04:10:45 localhost podman[62860]: 2025-10-05 08:10:45.321176177 +0000 UTC m=+0.105570755 container cleanup 1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, container_name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-nova-compute, config_id=tripleo_step3) Oct 5 04:10:45 localhost systemd[1]: libpod-conmon-1c3ea5787114d89b3ab5861ff5287656825be9ab024d850c3a09736d0300a8fc.scope: Deactivated successfully. Oct 5 04:10:45 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z 
registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Oct 5 04:10:45 localhost podman[62963]: 2025-10-05 08:10:45.51799039 +0000 UTC m=+0.078661526 container create ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=2, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Oct 5 04:10:45 localhost systemd[1]: Started libpod-conmon-ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1.scope. Oct 5 04:10:45 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987c8818be76af06807da2048ae7d1664e12d00146f4e3ab569d4620a3bc5442/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987c8818be76af06807da2048ae7d1664e12d00146f4e3ab569d4620a3bc5442/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987c8818be76af06807da2048ae7d1664e12d00146f4e3ab569d4620a3bc5442/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987c8818be76af06807da2048ae7d1664e12d00146f4e3ab569d4620a3bc5442/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost podman[62963]: 2025-10-05 08:10:45.577948113 +0000 UTC m=+0.138619249 container init ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:10:45 localhost podman[62963]: 2025-10-05 08:10:45.484617186 +0000 UTC m=+0.045288392 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost podman[62963]: 2025-10-05 08:10:45.586994862 +0000 UTC m=+0.147665998 container start ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12) Oct 5 04:10:45 localhost podman[63021]: 2025-10-05 08:10:45.679184289 +0000 UTC m=+0.071655636 container create 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, container_name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:10:45 localhost systemd[1]: Started libpod-conmon-0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa.scope. Oct 5 04:10:45 localhost systemd[1]: Started libcrun container. Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:45 localhost podman[63021]: 2025-10-05 08:10:45.742071812 +0000 UTC m=+0.134543179 container init 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=2, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd) Oct 5 04:10:45 localhost podman[63021]: 2025-10-05 08:10:45.642966916 +0000 UTC m=+0.035438293 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost podman[63021]: 2025-10-05 08:10:45.752799086 +0000 UTC m=+0.145270463 container start 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, batch=17.1_20250721.1, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, release=2, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, container_name=nova_virtsecretd) Oct 5 04:10:45 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume 
/sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:45 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 5 04:10:45 localhost systemd[1]: Started Session c3 of User root. Oct 5 04:10:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af-userdata-shm.mount: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: var-lib-containers-storage-overlay-1f399fda81bbe6240bca25723d60396a8f25e34829105df5d1e8b91fbce43961-merged.mount: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8-userdata-shm.mount: Deactivated successfully. Oct 5 04:10:45 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Oct 5 04:10:46 localhost podman[63160]: 2025-10-05 08:10:46.113171663 +0000 UTC m=+0.060259143 container create 2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:10:46 localhost podman[63161]: 2025-10-05 08:10:46.154840944 +0000 UTC m=+0.094124190 container create 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:27:15, version=17.1.9) Oct 5 04:10:46 localhost systemd[1]: Started libpod-conmon-2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23.scope. Oct 5 04:10:46 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost podman[63160]: 2025-10-05 08:10:46.077902816 +0000 UTC m=+0.024990276 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:46 localhost podman[63160]: 2025-10-05 08:10:46.183621214 +0000 UTC m=+0.130708694 container init 
2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, distribution-scope=public, config_id=tripleo_step3, version=17.1.9) Oct 5 04:10:46 localhost systemd[1]: Started libpod-conmon-6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.scope. 
Oct 5 04:10:46 localhost podman[63160]: 2025-10-05 08:10:46.193110143 +0000 UTC m=+0.140197623 container start 2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, release=2, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_virtnodedevd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:10:46 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:46 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b34dfa0926eebd9754e1c29502e939f5774c51688baaa6ab9821bcca9cd3b2/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b34dfa0926eebd9754e1c29502e939f5774c51688baaa6ab9821bcca9cd3b2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost podman[63161]: 2025-10-05 08:10:46.109524492 +0000 UTC m=+0.048807748 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 5 04:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 04:10:46 localhost podman[63161]: 2025-10-05 08:10:46.221961684 +0000 UTC m=+0.161244910 container init 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1)
Oct 5 04:10:46 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Oct 5 04:10:46 localhost systemd[1]: Started Session c4 of User root.
Oct 5 04:10:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 04:10:46 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Oct 5 04:10:46 localhost systemd[1]: Started Session c5 of User root.
Oct 5 04:10:46 localhost systemd[1]: session-c4.scope: Deactivated successfully.
Oct 5 04:10:46 localhost systemd[1]: session-c5.scope: Deactivated successfully. 
Oct 5 04:10:46 localhost podman[63161]: 2025-10-05 08:10:46.361969821 +0000 UTC m=+0.301253037 container start 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step3, release=1, tcib_managed=true, managed_by=tripleo_ansible, 
build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid) Oct 5 04:10:46 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4f35ee3aff3ccdd22a731d50021565d5 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume 
/etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Oct 5 04:10:46 localhost kernel: Loading iSCSI transport class v2.0-870. Oct 5 04:10:46 localhost podman[63212]: 2025-10-05 08:10:46.39951223 +0000 UTC m=+0.133289144 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:10:46 localhost podman[63212]: 2025-10-05 08:10:46.433101201 +0000 UTC m=+0.166878145 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:10:46 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:10:46 localhost podman[63339]: 2025-10-05 08:10:46.707271575 +0000 UTC m=+0.077803173 container create 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, architecture=x86_64, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-type=git, container_name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 5 04:10:46 localhost systemd[1]: Started libpod-conmon-7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028.scope. Oct 5 04:10:46 localhost podman[63339]: 2025-10-05 08:10:46.661222163 +0000 UTC m=+0.031753831 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:46 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 04:10:46 localhost podman[63339]: 2025-10-05 08:10:46.801939159 +0000 UTC m=+0.172470757 container init 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, container_name=nova_virtstoraged, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 04:10:46 localhost podman[63339]: 2025-10-05 08:10:46.810590196 +0000 UTC m=+0.181121794 container start 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=2, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, container_name=nova_virtstoraged, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, config_data={'cgroupns': 
'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}) Oct 5 04:10:46 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env 
KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:46 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 5 04:10:46 localhost systemd[1]: Started Session c6 of User root. 
Oct 5 04:10:46 localhost systemd[1]: session-c6.scope: Deactivated successfully. Oct 5 04:10:47 localhost podman[63442]: 2025-10-05 08:10:47.212993295 +0000 UTC m=+0.096860065 container create e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:10:47 localhost podman[63442]: 2025-10-05 08:10:47.152878968 +0000 UTC m=+0.036745768 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost systemd[1]: Started libpod-conmon-e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c.scope. Oct 5 04:10:47 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost podman[63442]: 2025-10-05 08:10:47.284629319 +0000 UTC m=+0.168496079 
container init e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step3, io.buildah.version=1.33.12, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-type=git, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 04:10:47 localhost podman[63442]: 2025-10-05 08:10:47.294599792 +0000 UTC m=+0.178466552 container start e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:59, release=2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, 
config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp 
openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 04:10:47 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 5 04:10:47 localhost systemd[1]: Started Session c7 of User root. Oct 5 04:10:47 localhost systemd[1]: session-c7.scope: Deactivated successfully. Oct 5 04:10:47 localhost podman[63546]: 2025-10-05 08:10:47.755590706 +0000 UTC m=+0.096623919 container create 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, container_name=nova_virtproxyd, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, release=2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 
'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, vcs-type=git) Oct 5 04:10:47 localhost podman[63546]: 2025-10-05 08:10:47.70102657 +0000 UTC m=+0.042059833 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost systemd[1]: Started libpod-conmon-9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413.scope. Oct 5 04:10:47 localhost systemd[1]: Started libcrun container. 
Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:10:47 localhost podman[63546]: 2025-10-05 08:10:47.830340525 +0000 UTC m=+0.171373688 container init 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, build-date=2025-07-21T14:56:59, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, container_name=nova_virtproxyd, release=2, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container) Oct 5 04:10:47 localhost podman[63546]: 2025-10-05 08:10:47.84111232 +0000 UTC m=+0.182145483 container start 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.expose-services=, container_name=nova_virtproxyd, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, architecture=x86_64) Oct 5 04:10:47 localhost python3[62444]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid 
--detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Oct 5 04:10:47 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Oct 5 04:10:47 localhost systemd[1]: Started Session c8 of User root. Oct 5 04:10:47 localhost systemd[1]: session-c8.scope: Deactivated successfully. Oct 5 04:10:48 localhost python3[63630]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:48 localhost python3[63646]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:48 localhost python3[63662]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[63678]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[63694]: 
ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[63710]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:49 localhost python3[63726]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[63742]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[63758]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:50 localhost python3[63774]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:50 localhost python3[63790]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:51 localhost python3[63806]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:51 localhost python3[63822]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:51 localhost python3[63838]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63854]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63870]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63886]: ansible-stat Invoked with 
path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:52 localhost python3[63902]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:10:53 localhost python3[63963]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:53 localhost python3[63992]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:54 localhost python3[64021]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:54 localhost python3[64050]: ansible-copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:55 localhost python3[64079]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:55 localhost python3[64108]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:56 localhost python3[64137]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:56 
localhost python3[64166]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:57 localhost python3[64195]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759651852.8569067-101726-178344782227490/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:10:57 localhost python3[64211]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 04:10:57 localhost systemd[1]: Reloading. Oct 5 04:10:58 localhost systemd-rc-local-generator[64234]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:10:58 localhost systemd-sysv-generator[64237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:10:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:10:58 localhost systemd[1]: Stopping User Manager for UID 0... 
Oct 5 04:10:58 localhost systemd[62747]: Activating special unit Exit the Session... Oct 5 04:10:58 localhost systemd[62747]: Stopped target Main User Target. Oct 5 04:10:58 localhost systemd[62747]: Stopped target Basic System. Oct 5 04:10:58 localhost systemd[62747]: Stopped target Paths. Oct 5 04:10:58 localhost systemd[62747]: Stopped target Sockets. Oct 5 04:10:58 localhost systemd[62747]: Stopped target Timers. Oct 5 04:10:58 localhost systemd[62747]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:10:58 localhost systemd[62747]: Closed D-Bus User Message Bus Socket. Oct 5 04:10:58 localhost systemd[62747]: Stopped Create User's Volatile Files and Directories. Oct 5 04:10:58 localhost systemd[62747]: Removed slice User Application Slice. Oct 5 04:10:58 localhost systemd[62747]: Reached target Shutdown. Oct 5 04:10:58 localhost systemd[62747]: Finished Exit the Session. Oct 5 04:10:58 localhost systemd[62747]: Reached target Exit the Session. Oct 5 04:10:58 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 04:10:58 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 04:10:58 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 04:10:58 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 04:10:58 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 04:10:58 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 04:10:58 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 04:10:58 localhost python3[64265]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:10:58 localhost systemd[1]: Reloading. Oct 5 04:10:58 localhost systemd-rc-local-generator[64290]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:10:58 localhost systemd-sysv-generator[64294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:10:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:10:59 localhost systemd[1]: Starting collectd container... Oct 5 04:10:59 localhost systemd[1]: Started collectd container. Oct 5 04:10:59 localhost sshd[64333]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:10:59 localhost python3[64332]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:10:59 localhost systemd[1]: Reloading. Oct 5 04:10:59 localhost sshd[64337]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:10:59 localhost systemd-rc-local-generator[64363]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:10:59 localhost systemd-sysv-generator[64369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:10:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:00 localhost systemd[1]: Starting iscsid container... Oct 5 04:11:00 localhost systemd[1]: Started iscsid container. 
Oct 5 04:11:00 localhost python3[64403]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:00 localhost systemd[1]: Reloading. Oct 5 04:11:01 localhost systemd-rc-local-generator[64426]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:01 localhost systemd-sysv-generator[64430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:01 localhost systemd[1]: Starting nova_virtlogd_wrapper container... Oct 5 04:11:01 localhost systemd[1]: Started nova_virtlogd_wrapper container. Oct 5 04:11:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:11:01 localhost systemd[1]: tmp-crun.KBsdeB.mount: Deactivated successfully. 
Oct 5 04:11:01 localhost podman[64469]: 2025-10-05 08:11:01.76283679 +0000 UTC m=+0.094902182 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step1, release=1, tcib_managed=true) Oct 5 04:11:01 localhost python3[64470]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:01 localhost podman[64469]: 2025-10-05 08:11:01.970276435 +0000 UTC m=+0.302341837 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, distribution-scope=public, name=rhosp17/openstack-qdrouterd) Oct 5 04:11:01 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:11:02 localhost systemd[1]: Reloading. Oct 5 04:11:03 localhost systemd-rc-local-generator[64525]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:03 localhost systemd-sysv-generator[64528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:03 localhost systemd[1]: Starting nova_virtnodedevd container... 
Oct 5 04:11:03 localhost tripleo-start-podman-container[64538]: Creating additional drop-in dependency for "nova_virtnodedevd" (2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23) Oct 5 04:11:03 localhost systemd[1]: Reloading. Oct 5 04:11:03 localhost systemd-sysv-generator[64598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:03 localhost systemd-rc-local-generator[64595]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:03 localhost systemd[1]: Started nova_virtnodedevd container. Oct 5 04:11:04 localhost python3[64622]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:04 localhost systemd[1]: Reloading. Oct 5 04:11:04 localhost systemd-sysv-generator[64652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:04 localhost systemd-rc-local-generator[64646]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:04 localhost systemd[1]: Starting nova_virtproxyd container... 
Oct 5 04:11:04 localhost tripleo-start-podman-container[64661]: Creating additional drop-in dependency for "nova_virtproxyd" (9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413) Oct 5 04:11:04 localhost systemd[1]: Reloading. Oct 5 04:11:05 localhost systemd-rc-local-generator[64718]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:05 localhost systemd-sysv-generator[64723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:05 localhost systemd[1]: Started nova_virtproxyd container. Oct 5 04:11:05 localhost python3[64745]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:05 localhost systemd[1]: Reloading. Oct 5 04:11:05 localhost systemd-rc-local-generator[64770]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:05 localhost systemd-sysv-generator[64774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:06 localhost systemd[1]: Starting nova_virtqemud container... 
Oct 5 04:11:06 localhost tripleo-start-podman-container[64784]: Creating additional drop-in dependency for "nova_virtqemud" (e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c) Oct 5 04:11:06 localhost systemd[1]: Reloading. Oct 5 04:11:06 localhost systemd-sysv-generator[64845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:06 localhost systemd-rc-local-generator[64839]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:06 localhost systemd[1]: Started nova_virtqemud container. Oct 5 04:11:07 localhost python3[64866]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:07 localhost systemd[1]: Reloading. Oct 5 04:11:07 localhost systemd-rc-local-generator[64895]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:07 localhost systemd-sysv-generator[64899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:07 localhost systemd[1]: Starting nova_virtsecretd container... 
Oct 5 04:11:07 localhost tripleo-start-podman-container[64906]: Creating additional drop-in dependency for "nova_virtsecretd" (0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa) Oct 5 04:11:07 localhost systemd[1]: Reloading. Oct 5 04:11:07 localhost systemd-sysv-generator[64967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:07 localhost systemd-rc-local-generator[64963]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:07 localhost systemd[1]: Started nova_virtsecretd container. Oct 5 04:11:08 localhost python3[64991]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:08 localhost systemd[1]: Reloading. Oct 5 04:11:08 localhost systemd-rc-local-generator[65015]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:08 localhost systemd-sysv-generator[65021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:09 localhost systemd[1]: Starting nova_virtstoraged container... 
Oct 5 04:11:09 localhost tripleo-start-podman-container[65031]: Creating additional drop-in dependency for "nova_virtstoraged" (7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028) Oct 5 04:11:09 localhost systemd[1]: Reloading. Oct 5 04:11:09 localhost systemd-rc-local-generator[65090]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:09 localhost systemd-sysv-generator[65094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:09 localhost systemd[1]: Started nova_virtstoraged container. Oct 5 04:11:10 localhost python3[65115]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:11:10 localhost systemd[1]: Reloading. Oct 5 04:11:10 localhost systemd-sysv-generator[65146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:11:10 localhost systemd-rc-local-generator[65141]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:11:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:11:10 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:10 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:10 localhost podman[65154]: 2025-10-05 08:11:10.712133402 +0000 UTC m=+0.140611966 container init 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.9, container_name=rsyslog, name=rhosp17/openstack-rsyslog, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:11:10 localhost podman[65154]: 2025-10-05 08:11:10.721915139 +0000 UTC m=+0.150393693 container start 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9) Oct 5 04:11:10 localhost podman[65154]: rsyslog Oct 5 04:11:10 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:10 localhost systemd[1]: libpod-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. 
Oct 5 04:11:10 localhost podman[65190]: 2025-10-05 08:11:10.879720434 +0000 UTC m=+0.044190281 container died 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, 
build-date=2025-07-21T12:58:40, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 5 04:11:10 localhost podman[65190]: 2025-10-05 08:11:10.905646655 +0000 UTC m=+0.070116462 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, com.redhat.component=openstack-rsyslog-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.33.12, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, container_name=rsyslog, tcib_managed=true) Oct 5 04:11:10 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:10 localhost podman[65203]: 2025-10-05 08:11:10.989558385 +0000 UTC m=+0.050599798 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, vcs-type=git, release=1, distribution-scope=public, config_id=tripleo_step3, build-date=2025-07-21T12:58:40, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-rsyslog-container) Oct 5 04:11:10 localhost podman[65203]: rsyslog Oct 5 04:11:10 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:11 localhost python3[65231]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:11:11 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Oct 5 04:11:11 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:11 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:11 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:11 localhost podman[65232]: 2025-10-05 08:11:11.461096678 +0000 UTC m=+0.130416796 container init 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, container_name=rsyslog, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Oct 5 04:11:11 localhost podman[65232]: 2025-10-05 08:11:11.471248377 +0000 UTC m=+0.140568505 container start 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, container_name=rsyslog, release=1, architecture=x86_64) Oct 5 04:11:11 localhost podman[65232]: rsyslog Oct 5 04:11:11 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:11 localhost systemd[1]: libpod-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. 
Oct 5 04:11:11 localhost podman[65281]: 2025-10-05 08:11:11.642205492 +0000 UTC m=+0.054364742 container died 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, release=1, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-rsyslog, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=rsyslog) Oct 5 04:11:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af-userdata-shm.mount: Deactivated successfully. Oct 5 04:11:11 localhost systemd[1]: var-lib-containers-storage-overlay-55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e-merged.mount: Deactivated successfully. Oct 5 04:11:11 localhost podman[65281]: 2025-10-05 08:11:11.669938821 +0000 UTC m=+0.082097981 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-rsyslog-container, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, tcib_managed=true, distribution-scope=public, version=17.1.9, release=1, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=rsyslog, vcs-type=git) Oct 5 04:11:11 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:11 localhost podman[65317]: 2025-10-05 08:11:11.746250493 +0000 UTC m=+0.044022437 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=rsyslog, io.k8s.description=Red Hat OpenStack 
Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, version=17.1.9, name=rhosp17/openstack-rsyslog, vcs-type=git) Oct 5 04:11:11 localhost podman[65317]: rsyslog Oct 5 04:11:11 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:12 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Oct 5 04:11:12 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:12 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:12 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:12 localhost podman[65372]: 2025-10-05 08:11:12.218137846 +0000 UTC m=+0.125136571 container init 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:40, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, tcib_managed=true, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public) Oct 5 04:11:12 localhost podman[65372]: 2025-10-05 08:11:12.227596536 +0000 UTC m=+0.134595311 container start 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, container_name=rsyslog, release=1, build-date=2025-07-21T12:58:40, name=rhosp17/openstack-rsyslog, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public) Oct 5 04:11:12 localhost podman[65372]: rsyslog Oct 5 04:11:12 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:12 localhost systemd[1]: libpod-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. 
Oct 5 04:11:12 localhost podman[65396]: 2025-10-05 08:11:12.336127859 +0000 UTC m=+0.029724285 container died 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, container_name=rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, version=17.1.9, name=rhosp17/openstack-rsyslog, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64) Oct 5 04:11:12 localhost podman[65396]: 2025-10-05 08:11:12.357428883 +0000 UTC m=+0.051025239 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=rsyslog, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, build-date=2025-07-21T12:58:40, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64, version=17.1.9) Oct 5 04:11:12 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:12 localhost podman[65424]: 2025-10-05 08:11:12.466132403 +0000 UTC m=+0.038495806 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 5 04:11:12 localhost podman[65424]: rsyslog Oct 5 04:11:12 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:12 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Oct 5 04:11:12 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:12 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:12 localhost python3[65451]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005471152 step=3 update_config_hash_only=False Oct 5 04:11:12 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:12 localhost podman[65452]: 2025-10-05 08:11:12.801025571 +0000 UTC m=+0.132942944 container init 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=rsyslog, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rsyslog, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=) Oct 5 04:11:12 localhost podman[65452]: 2025-10-05 08:11:12.811054316 +0000 UTC m=+0.142971689 container start 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 
['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, architecture=x86_64, name=rhosp17/openstack-rsyslog, tcib_managed=true, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.expose-services=) Oct 5 04:11:12 localhost podman[65452]: rsyslog Oct 5 04:11:12 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:12 localhost systemd[1]: libpod-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. 
Oct 5 04:11:12 localhost podman[65474]: 2025-10-05 08:11:12.938660543 +0000 UTC m=+0.045059366 container died 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:40, container_name=rsyslog, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, distribution-scope=public, 
name=rhosp17/openstack-rsyslog, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1) Oct 5 04:11:12 localhost podman[65474]: 2025-10-05 08:11:12.959169935 +0000 UTC m=+0.065568728 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, batch=17.1_20250721.1) Oct 5 04:11:12 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:11:13 localhost podman[65488]: 2025-10-05 08:11:13.057580242 +0000 UTC m=+0.059559373 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, release=1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T12:58:40) Oct 5 04:11:13 localhost podman[65488]: rsyslog Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Oct 5 04:11:13 localhost systemd[1]: Stopped rsyslog container. Oct 5 04:11:13 localhost systemd[1]: Starting rsyslog container... Oct 5 04:11:13 localhost systemd[1]: Started libcrun container. 
Oct 5 04:11:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Oct 5 04:11:13 localhost podman[65515]: 2025-10-05 08:11:13.443689945 +0000 UTC m=+0.123559898 container init 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=rsyslog, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167) Oct 5 04:11:13 localhost podman[65515]: 2025-10-05 08:11:13.452422464 +0000 UTC m=+0.132292407 container start 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, build-date=2025-07-21T12:58:40, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, container_name=rsyslog, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, distribution-scope=public, release=1, batch=17.1_20250721.1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=) Oct 5 04:11:13 localhost podman[65515]: rsyslog Oct 5 04:11:13 localhost systemd[1]: Started rsyslog container. Oct 5 04:11:13 localhost python3[65514]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:11:13 localhost systemd[1]: libpod-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af.scope: Deactivated successfully. 
Oct 5 04:11:13 localhost podman[65537]: 2025-10-05 08:11:13.592856523 +0000 UTC m=+0.037749157 container died 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-07-21T12:58:40, maintainer=OpenStack TripleO Team, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, container_name=rsyslog, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 04:11:13 localhost podman[65537]: 2025-10-05 08:11:13.614384103 +0000 UTC m=+0.059276697 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20250721.1, build-date=2025-07-21T12:58:40, container_name=rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:11:13 localhost systemd[1]: var-lib-containers-storage-overlay-55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e-merged.mount: Deactivated successfully.
Oct 5 04:11:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af-userdata-shm.mount: Deactivated successfully.
Oct 5 04:11:13 localhost podman[65563]: 2025-10-05 08:11:13.679728813 +0000 UTC m=+0.040868071 container cleanup 6239ec21dfb956ab54ab012ea1eee00814ea93a7c8e5da88dc84dd04ddfc49af (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=38a223d7b691af709e0a5f628409462e34eea167, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1, config_id=tripleo_step3, build-date=2025-07-21T12:58:40, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=rsyslog, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rsyslog/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.9, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c451d5e94e858df36b636f2835a46cda'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:11:13 localhost podman[65563]: rsyslog
Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5.
Oct 5 04:11:13 localhost systemd[1]: Stopped rsyslog container.
Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly.
Oct 5 04:11:13 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Oct 5 04:11:13 localhost systemd[1]: Failed to start rsyslog container.
Oct 5 04:11:13 localhost python3[65578]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True
Oct 5 04:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.
Oct 5 04:11:15 localhost podman[65579]: 2025-10-05 08:11:15.910883662 +0000 UTC m=+0.082661476 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, summary=Red 
Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:11:15 localhost podman[65579]: 2025-10-05 08:11:15.920092595 +0000 UTC m=+0.091870429 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, container_name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, release=2, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:11:15 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:11:16 localhost systemd[1]: tmp-crun.msMgpw.mount: Deactivated successfully. 
Oct 5 04:11:16 localhost podman[65599]: 2025-10-05 08:11:16.913899202 +0000 UTC m=+0.083903141 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Oct 5 04:11:16 localhost podman[65599]: 2025-10-05 08:11:16.949359274 +0000 UTC m=+0.119363233 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.9, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:11:16 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:11:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:11:32 localhost systemd[1]: tmp-crun.yBYMsX.mount: Deactivated successfully. 
Oct 5 04:11:32 localhost podman[65618]: 2025-10-05 08:11:32.917273533 +0000 UTC m=+0.089601327 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Oct 5 04:11:33 localhost podman[65618]: 2025-10-05 08:11:33.105162103 +0000 UTC m=+0.277489867 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, distribution-scope=public, vcs-type=git, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:11:33 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:11:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:11:46 localhost systemd[1]: tmp-crun.D6pAsj.mount: Deactivated successfully. 
Oct 5 04:11:46 localhost podman[65725]: 2025-10-05 08:11:46.923934673 +0000 UTC m=+0.084298261 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:11:46 localhost podman[65725]: 2025-10-05 08:11:46.93401354 +0000 UTC m=+0.094377128 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, release=2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, version=17.1.9, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:11:46 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:11:47 localhost podman[65745]: 2025-10-05 08:11:47.908859037 +0000 UTC m=+0.081452075 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12) Oct 5 04:11:47 localhost podman[65745]: 2025-10-05 08:11:47.915570071 +0000 UTC m=+0.088163139 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=) Oct 5 04:11:47 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:12:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:12:03 localhost podman[65765]: 2025-10-05 08:12:03.898477892 +0000 UTC m=+0.069604299 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, release=1) Oct 5 04:12:04 localhost podman[65765]: 2025-10-05 08:12:04.083938285 +0000 UTC m=+0.255064692 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr) Oct 5 04:12:04 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:12:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:12:17 localhost systemd[1]: tmp-crun.5hOAdp.mount: Deactivated successfully. 
Oct 5 04:12:17 localhost podman[65795]: 2025-10-05 08:12:17.928557493 +0000 UTC m=+0.085685789 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, 
tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:12:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:12:17 localhost podman[65795]: 2025-10-05 08:12:17.968139378 +0000 UTC m=+0.125267674 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, release=2, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 04:12:17 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:12:18 localhost systemd[1]: tmp-crun.4kNPiJ.mount: Deactivated successfully. 
Oct 5 04:12:18 localhost podman[65813]: 2025-10-05 08:12:18.054291999 +0000 UTC m=+0.087099888 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:12:18 localhost podman[65813]: 2025-10-05 08:12:18.062981688 +0000 UTC m=+0.095789557 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, com.redhat.component=openstack-iscsid-container, container_name=iscsid, release=1, architecture=x86_64, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:12:18 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:12:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:12:34 localhost podman[65848]: 2025-10-05 08:12:34.69199966 +0000 UTC m=+0.081528698 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:12:34 localhost podman[65848]: 2025-10-05 08:12:34.869927965 +0000 UTC m=+0.259456993 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
container_name=metrics_qdr, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible) Oct 5 04:12:34 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:12:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:12:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:12:48 localhost systemd[1]: tmp-crun.2oBDW8.mount: Deactivated successfully. Oct 5 04:12:48 localhost podman[65939]: 2025-10-05 08:12:48.922917619 +0000 UTC m=+0.093719627 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, container_name=iscsid, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:12:48 localhost podman[65940]: 2025-10-05 08:12:48.960689776 +0000 UTC m=+0.129042718 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:04:03, release=2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9) Oct 5 04:12:48 localhost podman[65940]: 2025-10-05 08:12:48.971061779 +0000 UTC m=+0.139414731 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, 
distribution-scope=public, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12) Oct 5 04:12:48 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:12:48 localhost podman[65939]: 2025-10-05 08:12:48.985434409 +0000 UTC m=+0.156236427 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:12:49 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:13:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:13:05 localhost podman[65980]: 2025-10-05 08:13:05.912213497 +0000 UTC m=+0.081040374 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, release=1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Oct 5 04:13:06 localhost podman[65980]: 2025-10-05 08:13:06.121207467 +0000 UTC m=+0.290034324 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=) Oct 5 04:13:06 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:13:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:13:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:13:19 localhost podman[66010]: 2025-10-05 08:13:19.911897542 +0000 UTC m=+0.076316614 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9) Oct 5 04:13:19 localhost podman[66010]: 2025-10-05 08:13:19.924104375 +0000 UTC m=+0.088523497 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-type=git) Oct 5 04:13:19 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:13:20 localhost podman[66009]: 2025-10-05 08:13:20.010128423 +0000 UTC m=+0.177756122 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=) Oct 5 04:13:20 localhost podman[66009]: 2025-10-05 08:13:20.045990567 +0000 UTC m=+0.213618256 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=) Oct 5 04:13:20 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:13:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:13:36 localhost podman[66062]: 2025-10-05 08:13:36.328700651 +0000 UTC m=+0.092668069 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, build-date=2025-07-21T13:07:59) Oct 5 04:13:36 localhost podman[66062]: 2025-10-05 08:13:36.539404768 +0000 UTC m=+0.303372176 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, distribution-scope=public, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:13:36 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:13:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:13:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:13:50 localhost podman[66153]: 2025-10-05 08:13:50.913650443 +0000 UTC m=+0.083083109 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 04:13:50 localhost podman[66153]: 2025-10-05 08:13:50.923486601 +0000 UTC m=+0.092919357 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container) Oct 5 04:13:50 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:13:51 localhost podman[66154]: 2025-10-05 08:13:51.010098044 +0000 UTC m=+0.178362658 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2) Oct 5 04:13:51 localhost podman[66154]: 2025-10-05 08:13:51.024141617 +0000 UTC m=+0.192406291 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, release=2, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, 
vendor=Red Hat, Inc.) Oct 5 04:13:51 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:14:06 localhost podman[66192]: 2025-10-05 08:14:06.916618824 +0000 UTC m=+0.087545631 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:14:07 localhost podman[66192]: 2025-10-05 08:14:07.134239188 +0000 UTC m=+0.305165995 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, config_id=tripleo_step1) Oct 5 04:14:07 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:14:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:14:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:14:21 localhost systemd[1]: tmp-crun.cNj865.mount: Deactivated successfully. 
Oct 5 04:14:21 localhost podman[66221]: 2025-10-05 08:14:21.913286355 +0000 UTC m=+0.085211047 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:14:21 localhost podman[66221]: 2025-10-05 08:14:21.926090463 +0000 UTC m=+0.098015115 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, release=1) Oct 5 04:14:21 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:14:22 localhost podman[66222]: 2025-10-05 08:14:22.018975518 +0000 UTC m=+0.186980573 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-collectd-container, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=) Oct 5 04:14:22 localhost podman[66222]: 2025-10-05 08:14:22.030293486 +0000 UTC m=+0.198298531 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, build-date=2025-07-21T13:04:03, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=2) Oct 5 04:14:22 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: 
Deactivated successfully. Oct 5 04:14:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:14:37 localhost systemd[1]: tmp-crun.e8U7wc.mount: Deactivated successfully. Oct 5 04:14:37 localhost podman[66260]: 2025-10-05 08:14:37.918013254 +0000 UTC m=+0.083107010 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:14:38 localhost podman[66260]: 2025-10-05 08:14:38.11322276 +0000 UTC m=+0.278316526 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 5 04:14:38 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:14:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:14:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:14:52 localhost podman[66418]: 2025-10-05 08:14:52.90527982 +0000 UTC m=+0.072374868 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack 
osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, container_name=collectd, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:14:52 localhost podman[66418]: 2025-10-05 08:14:52.913170595 +0000 UTC m=+0.080265643 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, tcib_managed=true, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03) Oct 5 04:14:52 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:14:52 localhost systemd[1]: tmp-crun.7EMK4G.mount: Deactivated successfully. 
Oct 5 04:14:52 localhost podman[66417]: 2025-10-05 08:14:52.962877766 +0000 UTC m=+0.131243138 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc.) Oct 5 04:14:52 localhost podman[66417]: 2025-10-05 08:14:52.999185203 +0000 UTC m=+0.167550635 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, tcib_managed=true, container_name=iscsid) Oct 5 04:14:53 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:15:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:15:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4363 writes, 20K keys, 4363 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4363 writes, 480 syncs, 9.09 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 348 writes, 865 keys, 348 commit groups, 1.0 writes per commit group, ingest: 0.63 MB, 0.00 MB/s#012Interval WAL: 348 writes, 170 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:15:02 localhost python3[66497]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:03 localhost python3[66542]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652102.4309995-108623-260899372694321/source _original_basename=tmpho7_czo3 follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:04 localhost python3[66604]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:04 localhost python3[66647]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652104.2437866-108795-27165987117854/source _original_basename=tmp4t1my6kg follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:15:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5237 writes, 23K keys, 5237 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5237 writes, 572 syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 401 writes, 849 keys, 401 commit groups, 1.0 writes per commit group, ingest: 0.57 MB, 0.00 MB/s#012Interval WAL: 401 writes, 196 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 
04:15:05 localhost python3[66709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:05 localhost python3[66752]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652105.1733475-108854-55149542789719/source _original_basename=tmpflh6u31h follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:06 localhost python3[66814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:06 localhost python3[66857]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652106.1256297-108912-244771564653216/source _original_basename=tmp4yq3cmtk follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:07 localhost python3[66887]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 04:15:07 localhost systemd[1]: Reloading. 
Oct 5 04:15:07 localhost systemd-sysv-generator[66917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:07 localhost systemd-rc-local-generator[66913]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:15:08 localhost systemd[1]: Reloading. Oct 5 04:15:08 localhost podman[66925]: 2025-10-05 08:15:08.219923524 +0000 UTC m=+0.077087796 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, config_id=tripleo_step1, io.openshift.expose-services=, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:15:08 localhost systemd-rc-local-generator[66973]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:08 localhost systemd-sysv-generator[66977]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 04:15:08 localhost podman[66925]: 2025-10-05 08:15:08.415089008 +0000 UTC m=+0.272253240 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, version=17.1.9) Oct 5 04:15:08 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:15:08 localhost python3[67008]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:15:08 localhost systemd[1]: Reloading. Oct 5 04:15:08 localhost systemd-rc-local-generator[67033]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:08 localhost systemd-sysv-generator[67036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:15:09 localhost systemd[1]: Reloading. Oct 5 04:15:09 localhost systemd-rc-local-generator[67071]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:09 localhost systemd-sysv-generator[67077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 04:15:09 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Oct 5 04:15:09 localhost python3[67099]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 04:15:09 localhost systemd[1]: Reloading.
Oct 5 04:15:10 localhost systemd-rc-local-generator[67119]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:10 localhost systemd-sysv-generator[67122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:10 localhost python3[67182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:11 localhost python3[67225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652110.4526443-109016-116616280848506/source _original_basename=tmpakdpm5uq follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:11 localhost python3[67255]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:15:11 localhost systemd[1]: Reloading.
Oct 5 04:15:11 localhost systemd-rc-local-generator[67278]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:11 localhost systemd-sysv-generator[67281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:12 localhost systemd[1]: Reached target tripleo_nova_libvirt.target.
Oct 5 04:15:12 localhost python3[67310]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:15:14 localhost ansible-async_wrapper.py[67482]: Invoked with 828349470369 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652113.6100156-109107-114344492200330/AnsiballZ_command.py _
Oct 5 04:15:14 localhost ansible-async_wrapper.py[67485]: Starting module and watcher
Oct 5 04:15:14 localhost ansible-async_wrapper.py[67485]: Start watching 67486 (3600)
Oct 5 04:15:14 localhost ansible-async_wrapper.py[67486]: Start module (67486)
Oct 5 04:15:14 localhost ansible-async_wrapper.py[67482]: Return async_wrapper task started.
Oct 5 04:15:14 localhost python3[67506]: ansible-ansible.legacy.async_status Invoked with jid=828349470369.67482 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 5 04:15:17 localhost puppet-user[67504]: (file: /etc/puppet/hiera.yaml)
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: Undefined variable '::deploy_config_name';
Oct 5 04:15:17 localhost puppet-user[67504]: (file & line not available)
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 5 04:15:17 localhost puppet-user[67504]: (file & line not available)
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:15:17 localhost puppet-user[67504]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:15:17 localhost puppet-user[67504]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:15:17 localhost puppet-user[67504]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:15:17 localhost puppet-user[67504]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:15:17 localhost puppet-user[67504]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:15:17 localhost puppet-user[67504]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:15:17 localhost puppet-user[67504]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:15:17 localhost puppet-user[67504]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:15:17 localhost puppet-user[67504]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 5 04:15:17 localhost puppet-user[67504]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Oct 5 04:15:17 localhost puppet-user[67504]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 5 04:15:17 localhost puppet-user[67504]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 5 04:15:17 localhost puppet-user[67504]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.21 seconds
Oct 5 04:15:19 localhost ansible-async_wrapper.py[67485]: 67486 still running (3600)
Oct 5 04:15:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 04:15:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.
Oct 5 04:15:23 localhost podman[67682]: 2025-10-05 08:15:23.931012332 +0000 UTC m=+0.088896407 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-iscsid-container, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.expose-services=)
Oct 5 04:15:23 localhost podman[67682]: 2025-10-05 08:15:23.940799069 +0000 UTC m=+0.098683164 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, vendor=Red Hat, Inc., container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true)
Oct 5 04:15:23 localhost podman[67683]: 2025-10-05 08:15:23.981829673 +0000 UTC m=+0.136303364 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, architecture=x86_64)
Oct 5 04:15:23 localhost podman[67683]: 2025-10-05 08:15:23.995095404 +0000 UTC m=+0.149569105 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git)
Oct 5 04:15:24 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully.
Oct 5 04:15:24 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully.
Oct 5 04:15:24 localhost ansible-async_wrapper.py[67485]: 67486 still running (3595)
Oct 5 04:15:24 localhost python3[67746]: ansible-ansible.legacy.async_status Invoked with jid=828349470369.67482 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:15:25 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 04:15:25 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 04:15:25 localhost systemd[1]: Reloading.
Oct 5 04:15:25 localhost systemd-sysv-generator[67824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:25 localhost systemd-rc-local-generator[67818]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:26 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 04:15:26 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 04:15:26 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 04:15:26 localhost systemd[1]: man-db-cache-update.service: Consumed 1.127s CPU time.
Oct 5 04:15:26 localhost systemd[1]: run-rec7c89621d824925a95e29bcb8fb2e09.service: Deactivated successfully.
Oct 5 04:15:27 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Oct 5 04:15:27 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}7b223a02de0edcd2a8e74227c0ca8fd3030b827c9943e5563275164eb61c13a8'
Oct 5 04:15:27 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Oct 5 04:15:27 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Oct 5 04:15:27 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Oct 5 04:15:27 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Oct 5 04:15:29 localhost ansible-async_wrapper.py[67485]: 67486 still running (3590)
Oct 5 04:15:32 localhost puppet-user[67504]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Oct 5 04:15:32 localhost systemd[1]: Reloading.
Oct 5 04:15:32 localhost systemd-rc-local-generator[68877]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:32 localhost systemd-sysv-generator[68881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:33 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Oct 5 04:15:33 localhost snmpd[68888]: Can't find directory of RPM packages
Oct 5 04:15:33 localhost snmpd[68888]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Oct 5 04:15:33 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Oct 5 04:15:33 localhost systemd[1]: Reloading.
Oct 5 04:15:33 localhost systemd-sysv-generator[68917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:33 localhost systemd-rc-local-generator[68912]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:33 localhost systemd[1]: Reloading.
Oct 5 04:15:33 localhost systemd-sysv-generator[68954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:33 localhost systemd-rc-local-generator[68950]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:33 localhost puppet-user[67504]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Oct 5 04:15:33 localhost puppet-user[67504]: Notice: Applied catalog in 16.04 seconds
Oct 5 04:15:33 localhost puppet-user[67504]: Application:
Oct 5 04:15:33 localhost puppet-user[67504]: Initial environment: production
Oct 5 04:15:33 localhost puppet-user[67504]: Converged environment: production
Oct 5 04:15:33 localhost puppet-user[67504]: Run mode: user
Oct 5 04:15:33 localhost puppet-user[67504]: Changes:
Oct 5 04:15:33 localhost puppet-user[67504]: Total: 8
Oct 5 04:15:33 localhost puppet-user[67504]: Events:
Oct 5 04:15:33 localhost puppet-user[67504]: Success: 8
Oct 5 04:15:33 localhost puppet-user[67504]: Total: 8
Oct 5 04:15:33 localhost puppet-user[67504]: Resources:
Oct 5 04:15:33 localhost puppet-user[67504]: Restarted: 1
Oct 5 04:15:33 localhost puppet-user[67504]: Changed: 8
Oct 5 04:15:33 localhost puppet-user[67504]: Out of sync: 8
Oct 5 04:15:33 localhost puppet-user[67504]: Total: 19
Oct 5 04:15:33 localhost puppet-user[67504]: Time:
Oct 5 04:15:33 localhost puppet-user[67504]: Filebucket: 0.00
Oct 5 04:15:33 localhost puppet-user[67504]: Schedule: 0.00
Oct 5 04:15:33 localhost puppet-user[67504]: Augeas: 0.01
Oct 5 04:15:33 localhost puppet-user[67504]: File: 0.08
Oct 5 04:15:33 localhost puppet-user[67504]: Config retrieval: 0.26
Oct 5 04:15:33 localhost puppet-user[67504]: Service: 1.21
Oct 5 04:15:33 localhost puppet-user[67504]: Transaction evaluation: 16.02
Oct 5 04:15:33 localhost puppet-user[67504]: Catalog application: 16.04
Oct 5 04:15:33 localhost puppet-user[67504]: Last run: 1759652133
Oct 5 04:15:33 localhost puppet-user[67504]: Exec: 5.06
Oct 5 04:15:33 localhost puppet-user[67504]: Package: 9.49
Oct 5 04:15:33 localhost puppet-user[67504]: Total: 16.04
Oct 5 04:15:33 localhost puppet-user[67504]: Version:
Oct 5 04:15:33 localhost puppet-user[67504]: Config: 1759652117
Oct 5 04:15:33 localhost puppet-user[67504]: Puppet: 7.10.0
Oct 5 04:15:33 localhost ansible-async_wrapper.py[67486]: Module complete (67486)
Oct 5 04:15:34 localhost ansible-async_wrapper.py[67485]: Done in kid B.
Oct 5 04:15:35 localhost python3[68976]: ansible-ansible.legacy.async_status Invoked with jid=828349470369.67482 mode=status _async_dir=/tmp/.ansible_async
Oct 5 04:15:35 localhost python3[68992]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:15:36 localhost python3[69008]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:15:36 localhost python3[69058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:36 localhost python3[69076]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp1hzwax19 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 5 04:15:37 localhost python3[69106]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:38 localhost python3[69209]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 5 04:15:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.
Oct 5 04:15:38 localhost systemd[1]: tmp-crun.7ZaWlB.mount: Deactivated successfully.
Oct 5 04:15:38 localhost podman[69213]: 2025-10-05 08:15:38.937319445 +0000 UTC m=+0.097778648 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, distribution-scope=public)
Oct 5 04:15:39 localhost podman[69213]: 2025-10-05 08:15:39.114880191 +0000 UTC m=+0.275339324 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1)
Oct 5 04:15:39 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully.
Oct 5 04:15:39 localhost python3[69257]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:40 localhost python3[69289]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 04:15:40 localhost python3[69339]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:40 localhost python3[69357]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:41 localhost python3[69419]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:41 localhost python3[69437]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:42 localhost python3[69499]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:42 localhost python3[69517]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:43 localhost python3[69579]: ansible-ansible.legacy.stat Invoked with
path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:15:43 localhost python3[69597]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:43 localhost python3[69627]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:15:43 localhost systemd[1]: Reloading. Oct 5 04:15:43 localhost systemd-rc-local-generator[69650]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:15:43 localhost systemd-sysv-generator[69653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:15:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 04:15:44 localhost python3[69713]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:44 localhost python3[69731]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:45 localhost python3[69793]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 5 04:15:45 localhost python3[69811]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 04:15:46 localhost python3[69841]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 04:15:46 localhost systemd[1]: Reloading.
Oct 5 04:15:46 localhost systemd-rc-local-generator[69863]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 04:15:46 localhost systemd-sysv-generator[69867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 04:15:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 04:15:46 localhost systemd[1]: Starting Create netns directory...
Oct 5 04:15:46 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 04:15:46 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 04:15:46 localhost systemd[1]: Finished Create netns directory.
Oct 5 04:15:47 localhost python3[69928]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 5 04:15:48 localhost podman[70125]:
Oct 5 04:15:48 localhost podman[70125]: 2025-10-05 08:15:48.876147405 +0000 UTC m=+0.075746509 container create d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_pare, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, com.redhat.component=rhceph-container, version=7, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 04:15:48 localhost systemd[1]: Started libpod-conmon-d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17.scope.
Oct 5 04:15:48 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:48 localhost podman[70125]: 2025-10-05 08:15:48.847210578 +0000 UTC m=+0.046809702 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 04:15:48 localhost podman[70125]: 2025-10-05 08:15:48.949322784 +0000 UTC m=+0.148921888 container init d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_pare, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, GIT_BRANCH=main, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 04:15:48 localhost podman[70125]: 2025-10-05 08:15:48.960738594 +0000 UTC m=+0.160337698 container start d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_pare, vcs-type=git, release=553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 04:15:48 localhost podman[70125]: 2025-10-05 08:15:48.961249128 +0000 UTC m=+0.160848232 container attach d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_pare, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main)
Oct 5 04:15:48 localhost quirky_pare[70140]: 167 167
Oct 5 04:15:48 localhost systemd[1]: libpod-d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17.scope: Deactivated successfully.
Oct 5 04:15:48 localhost podman[70125]: 2025-10-05 08:15:48.966058239 +0000 UTC m=+0.165657393 container died d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_pare, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph)
Oct 5 04:15:49 localhost podman[70145]: 2025-10-05 08:15:49.057576965 +0000 UTC m=+0.080929149 container remove d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_pare, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True)
Oct 5 04:15:49 localhost systemd[1]: libpod-conmon-d08ea88a3a5cf0f2027fddd908f8222a16e01036b35b573f1786a25d28916c17.scope: Deactivated successfully.
Oct 5 04:15:49 localhost podman[70165]:
Oct 5 04:15:49 localhost podman[70165]: 2025-10-05 08:15:49.267580013 +0000 UTC m=+0.074502545 container create 1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_sammet, vendor=Red Hat, Inc., name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80.scope.
Oct 5 04:15:49 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3e36873754de2f75f0273b8ad3edd07407833f51500c218d0af79c4067c422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3e36873754de2f75f0273b8ad3edd07407833f51500c218d0af79c4067c422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3e36873754de2f75f0273b8ad3edd07407833f51500c218d0af79c4067c422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 5 04:15:49 localhost podman[70165]: 2025-10-05 08:15:49.335539241 +0000 UTC m=+0.142461763 container init 1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_sammet, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, release=553, maintainer=Guillaume Abrioux , version=7, build-date=2025-09-24T08:57:55, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container)
Oct 5 04:15:49 localhost podman[70165]: 2025-10-05 08:15:49.238872813 +0000 UTC m=+0.045795415 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 04:15:49 localhost podman[70165]: 2025-10-05 08:15:49.342553391 +0000 UTC m=+0.149475923 container start 1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_sammet, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 04:15:49 localhost podman[70165]: 2025-10-05 08:15:49.342823678 +0000 UTC m=+0.149746210 container attach 1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_sammet, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, release=553, vcs-type=git, version=7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph)
Oct 5 04:15:49 localhost python3[70199]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 5 04:15:49 localhost podman[70370]: 2025-10-05 08:15:49.836253159 +0000 UTC m=+0.076074839 container create 04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, distribution-scope=public, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_libvirt_init_secret, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 04:15:49 localhost podman[70388]: 2025-10-05 08:15:49.847307979 +0000 UTC m=+0.074518686 container create aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1)
Oct 5 04:15:49 localhost podman[70403]: 2025-10-05 08:15:49.867107637 +0000 UTC m=+0.085770091 container create 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339.scope.
Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.scope.
Oct 5 04:15:49 localhost systemd[1]: var-lib-containers-storage-overlay-472a11a629524cf8fb1d922f654dbfd2e6718d52ac483301ee69d320f4ae939c-merged.mount: Deactivated successfully.
Oct 5 04:15:49 localhost podman[70404]: 2025-10-05 08:15:49.892288171 +0000 UTC m=+0.105635491 container create c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, release=1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=configure_cms_options, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 5 04:15:49 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:49 localhost systemd[1]: Started libcrun container.
Oct 5 04:15:49 localhost podman[70370]: 2025-10-05 08:15:49.800899458 +0000 UTC m=+0.040721148 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.scope.
Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd5d7f208e627ed078801541a11c92d30dfbffb1c7200a7e88292fbfc56b82d/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost podman[70388]: 2025-10-05 08:15:49.80244263 +0000 UTC m=+0.029653337 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 5 04:15:49 localhost podman[70403]: 2025-10-05 08:15:49.807947279 +0000 UTC m=+0.026609733 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 04:15:49 localhost podman[70382]: 2025-10-05 08:15:49.812788961 +0000 UTC m=+0.039884605 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 5 04:15:49 localhost podman[70370]: 2025-10-05 08:15:49.911244487 +0000 UTC m=+0.151066167 container init 04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=nova_libvirt_init_secret, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 
'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, config_id=tripleo_step4, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, release=2, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible) Oct 5 04:15:49 localhost systemd[1]: tmp-crun.ZRmX50.mount: Deactivated successfully. 
Oct 5 04:15:49 localhost podman[70370]: 2025-10-05 08:15:49.917120406 +0000 UTC m=+0.156942086 container start 04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=nova_libvirt_init_secret, distribution-scope=public, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0) Oct 5 04:15:49 localhost podman[70370]: 2025-10-05 08:15:49.918199416 +0000 UTC m=+0.158021106 container attach 04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:56:59, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:15:49 localhost systemd[1]: Started libpod-conmon-c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec.scope. Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. Oct 5 04:15:49 localhost podman[70404]: 2025-10-05 08:15:49.827912002 +0000 UTC m=+0.041259302 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 04:15:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55b66b4cc27e216ee661f88e3740f080132b0ec881f50e70b03e2853c0d8b80/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:49 localhost systemd[1]: Started libcrun container. 
Oct 5 04:15:49 localhost podman[70382]: 2025-10-05 08:15:49.940445421 +0000 UTC m=+0.167541045 container create 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:15:49 localhost podman[70404]: 2025-10-05 08:15:49.943182885 +0000 UTC m=+0.156530185 container init c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=configure_cms_options, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public) Oct 5 04:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 04:15:49 localhost podman[70403]: 2025-10-05 08:15:49.948048327 +0000 UTC m=+0.166710791 container init 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, release=1, architecture=x86_64, vendor=Red Hat, Inc., container_name=logrotate_crond, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:15:49 localhost podman[70404]: 2025-10-05 08:15:49.950885144 +0000 UTC m=+0.164232444 container start c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=configure_cms_options, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ovn-controller, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:49 localhost podman[70404]: 2025-10-05 08:15:49.951054229 +0000 UTC m=+0.164401519 container attach c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, container_name=configure_cms_options, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Oct 5 04:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 04:15:49 localhost podman[70403]: 2025-10-05 08:15:49.971503865 +0000 UTC m=+0.190166309 container start 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, 
container_name=logrotate_crond, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12) Oct 5 04:15:49 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro 
--volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Oct 5 04:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:15:49 localhost podman[70388]: 2025-10-05 08:15:49.981568908 +0000 UTC m=+0.208779615 container init aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:15:50 localhost systemd[1]: Started libpod-conmon-528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.scope. Oct 5 04:15:50 localhost systemd[1]: libpod-04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339.scope: Deactivated successfully. 
Oct 5 04:15:50 localhost podman[70370]: 2025-10-05 08:15:50.032888463 +0000 UTC m=+0.272710153 container died 04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, managed_by=tripleo_ansible, release=2, vcs-type=git, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_libvirt_init_secret, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 04:15:50 localhost podman[71218]: 2025-10-05 08:15:50.038166816 +0000 UTC m=+0.061247915 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:15:50 localhost systemd[1]: Started libcrun container. Oct 5 04:15:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbb5021f030002945ce2fa60285c1586ad519c4dce8fbf294a1ab2597d3a339/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:50 localhost ovs-vsctl[71513]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. 
Oct 5 04:15:50 localhost podman[70382]: 2025-10-05 08:15:50.066714853 +0000 UTC m=+0.293810477 container init 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, vendor=Red 
Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:15:50 localhost systemd[1]: libpod-c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec.scope: Deactivated successfully. Oct 5 04:15:50 localhost podman[70404]: 2025-10-05 08:15:50.069276622 +0000 UTC m=+0.282623922 container died c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, release=1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. 
Oct 5 04:15:50 localhost podman[70382]: 2025-10-05 08:15:50.087198629 +0000 UTC m=+0.314294263 container start 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9) Oct 5 04:15:50 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=7ae8f92d3eaef9724f650e9e8c537f24 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Oct 5 04:15:50 localhost podman[70388]: 2025-10-05 08:15:50.106227526 +0000 UTC m=+0.333438233 container start aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, release=1) Oct 5 04:15:50 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=7ae8f92d3eaef9724f650e9e8c537f24 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Oct 5 04:15:50 localhost podman[71328]: 2025-10-05 08:15:50.133852787 +0000 UTC m=+0.127857706 container health_status 
aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:15:50 localhost festive_sammet[70195]: [ Oct 5 04:15:50 localhost festive_sammet[70195]: { Oct 5 04:15:50 localhost festive_sammet[70195]: "available": false, Oct 5 04:15:50 localhost festive_sammet[70195]: "ceph_device": false, Oct 5 04:15:50 localhost festive_sammet[70195]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 04:15:50 localhost festive_sammet[70195]: "lsm_data": {}, Oct 5 04:15:50 localhost festive_sammet[70195]: "lvs": [], Oct 5 04:15:50 localhost festive_sammet[70195]: "path": "/dev/sr0", Oct 5 04:15:50 localhost festive_sammet[70195]: "rejected_reasons": [ Oct 5 04:15:50 localhost festive_sammet[70195]: "Has a FileSystem", Oct 5 04:15:50 localhost festive_sammet[70195]: "Insufficient space (<5GB)" Oct 5 04:15:50 localhost festive_sammet[70195]: ], Oct 5 04:15:50 localhost festive_sammet[70195]: "sys_api": { Oct 5 04:15:50 localhost festive_sammet[70195]: "actuators": null, Oct 5 04:15:50 localhost festive_sammet[70195]: "device_nodes": "sr0", Oct 5 04:15:50 localhost festive_sammet[70195]: "human_readable_size": "482.00 KB", Oct 5 04:15:50 localhost festive_sammet[70195]: "id_bus": "ata", Oct 5 04:15:50 localhost festive_sammet[70195]: "model": "QEMU DVD-ROM", Oct 5 04:15:50 localhost festive_sammet[70195]: "nr_requests": "2", Oct 5 04:15:50 localhost festive_sammet[70195]: "partitions": {}, Oct 5 04:15:50 localhost festive_sammet[70195]: "path": "/dev/sr0", Oct 5 04:15:50 localhost festive_sammet[70195]: "removable": "1", Oct 5 04:15:50 localhost festive_sammet[70195]: "rev": "2.5+", Oct 5 04:15:50 localhost festive_sammet[70195]: "ro": "0", Oct 5 04:15:50 localhost festive_sammet[70195]: "rotational": "1", Oct 5 04:15:50 localhost festive_sammet[70195]: "sas_address": "", Oct 5 04:15:50 localhost 
festive_sammet[70195]: "sas_device_handle": "", Oct 5 04:15:50 localhost festive_sammet[70195]: "scheduler_mode": "mq-deadline", Oct 5 04:15:50 localhost festive_sammet[70195]: "sectors": 0, Oct 5 04:15:50 localhost festive_sammet[70195]: "sectorsize": "2048", Oct 5 04:15:50 localhost festive_sammet[70195]: "size": 493568.0, Oct 5 04:15:50 localhost festive_sammet[70195]: "support_discard": "0", Oct 5 04:15:50 localhost festive_sammet[70195]: "type": "disk", Oct 5 04:15:50 localhost festive_sammet[70195]: "vendor": "QEMU" Oct 5 04:15:50 localhost festive_sammet[70195]: } Oct 5 04:15:50 localhost festive_sammet[70195]: } Oct 5 04:15:50 localhost festive_sammet[70195]: ] Oct 5 04:15:50 localhost podman[71328]: 2025-10-05 08:15:50.197172298 +0000 UTC m=+0.191177237 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:15:50 localhost podman[71328]: unhealthy Oct 5 04:15:50 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:15:50 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'. Oct 5 04:15:50 localhost systemd[1]: libpod-1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80.scope: Deactivated successfully. 
Oct 5 04:15:50 localhost podman[70165]: 2025-10-05 08:15:50.223382831 +0000 UTC m=+1.030305363 container died 1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_sammet, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, release=553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7) Oct 5 04:15:50 localhost podman[71658]: 2025-10-05 08:15:50.257737134 +0000 UTC m=+0.174121903 container cleanup c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=configure_cms_options, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:15:50 localhost systemd[1]: libpod-conmon-c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec.scope: Deactivated successfully. 
Oct 5 04:15:50 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi Oct 5 04:15:50 localhost podman[71700]: 2025-10-05 08:15:50.309284305 +0000 UTC m=+0.219384603 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:15:50 localhost podman[71218]: 2025-10-05 08:15:50.32492562 +0000 UTC m=+0.348006699 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team) Oct 5 04:15:50 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:15:50 localhost podman[71700]: 2025-10-05 08:15:50.366605273 +0000 UTC m=+0.276705571 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12) Oct 5 04:15:50 localhost podman[71700]: unhealthy Oct 5 04:15:50 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:15:50 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Failed with result 'exit-code'. Oct 5 04:15:50 localhost podman[71466]: 2025-10-05 08:15:50.449708271 +0000 UTC m=+0.403999631 container cleanup 04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.buildah.version=1.33.12, io.openshift.expose-services=, release=2, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step4, architecture=x86_64, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, 
config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, managed_by=tripleo_ansible) Oct 5 04:15:50 localhost podman[72157]: 2025-10-05 08:15:50.452903649 +0000 UTC m=+0.223230299 container remove 1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_sammet, vendor=Red Hat, Inc., io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 
9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:15:50 localhost systemd[1]: libpod-conmon-04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339.scope: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: libpod-conmon-1b261850c3fec85e66f1cf8490ec172c2c9b00326de8522e56d931d81f763c80.scope: Deactivated successfully. Oct 5 04:15:50 localhost podman[72293]: 2025-10-05 08:15:50.532844511 +0000 UTC m=+0.026925033 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:15:50 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Oct 5 04:15:50 localhost podman[72293]: 2025-10-05 08:15:50.606998237 +0000 UTC m=+0.101078719 container create 
b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, container_name=setup_ovs_manager, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:15:50 localhost podman[72280]: 2025-10-05 08:15:50.613111133 +0000 UTC m=+0.155970301 container create 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:15:50 localhost podman[72280]: 2025-10-05 08:15:50.552389492 +0000 UTC m=+0.095248680 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:15:50 localhost systemd[1]: Started libpod-conmon-69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.scope. Oct 5 04:15:50 localhost systemd[1]: Started libcrun container. Oct 5 04:15:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d5660b8fd17c472ba639c36602afe3ef86a2b23ac8f1b2407f6d07d573e2fc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:50 localhost systemd[1]: Started libpod-conmon-b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246.scope. Oct 5 04:15:50 localhost systemd[1]: Started libcrun container. Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:15:50 localhost podman[72280]: 2025-10-05 08:15:50.729099625 +0000 UTC m=+0.271958813 container init 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target) Oct 5 04:15:50 localhost podman[72293]: 2025-10-05 08:15:50.732072015 +0000 UTC m=+0.226152507 container init b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:28:53, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 
'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}) Oct 5 04:15:50 localhost podman[72293]: 2025-10-05 08:15:50.740914796 +0000 UTC m=+0.234995258 container start b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=setup_ovs_manager, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:15:50 localhost podman[72293]: 2025-10-05 08:15:50.741296276 +0000 UTC m=+0.235376778 container attach b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T16:28:53, container_name=setup_ovs_manager, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, version=17.1.9, config_id=tripleo_step4) Oct 5 04:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:15:50 localhost podman[72280]: 2025-10-05 08:15:50.764375584 +0000 UTC m=+0.307234782 container start 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=) Oct 5 04:15:50 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-49720dbe515448afa07243eb8af1d9da9501d8bf4fb266e194f65378a3f3db49-merged.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c84fae4fdb19f246414fcc38f746fdb1742a85160abb4dde77cd014336470eec-userdata-shm.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a-merged.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04cb40ecc9993470acec1f4f7dcbe1f42b34186920fddb8ecd5537821d9d6339-userdata-shm.mount: Deactivated successfully. Oct 5 04:15:50 localhost systemd[1]: var-lib-containers-storage-overlay-dd3e36873754de2f75f0273b8ad3edd07407833f51500c218d0af79c4067c422-merged.mount: Deactivated successfully. 
Oct 5 04:15:50 localhost systemd[1]: tmp-crun.ni0DCw.mount: Deactivated successfully. Oct 5 04:15:50 localhost podman[72351]: 2025-10-05 08:15:50.911705407 +0000 UTC m=+0.140594972 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:15:51 localhost podman[72351]: 2025-10-05 08:15:51.234398068 +0000 UTC m=+0.463287583 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:15:51 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:15:51 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Oct 5 04:15:53 localhost ovs-vsctl[72541]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Oct 5 04:15:53 localhost systemd[1]: libpod-b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246.scope: Deactivated successfully. Oct 5 04:15:53 localhost systemd[1]: libpod-b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246.scope: Consumed 2.847s CPU time. 
Oct 5 04:15:53 localhost podman[72542]: 2025-10-05 08:15:53.669412727 +0000 UTC m=+0.056485536 container died b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, container_name=setup_ovs_manager, io.buildah.version=1.33.12, release=1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=) Oct 5 04:15:53 localhost systemd[1]: tmp-crun.KNb6BS.mount: Deactivated successfully. Oct 5 04:15:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246-userdata-shm.mount: Deactivated successfully. Oct 5 04:15:53 localhost systemd[1]: var-lib-containers-storage-overlay-32f9080afca125bdea732b66d70a39fe7d55069eaac1a486e6086cede937e213-merged.mount: Deactivated successfully. Oct 5 04:15:53 localhost podman[72542]: 2025-10-05 08:15:53.714198204 +0000 UTC m=+0.101270973 container cleanup b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include 
tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9) Oct 5 04:15:53 localhost systemd[1]: libpod-conmon-b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246.scope: Deactivated successfully. 
Oct 5 04:15:53 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1759650341 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1759650341'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Oct 5 04:15:54 localhost podman[72638]: 2025-10-05 08:15:54.154959293 +0000 UTC m=+0.077141407 container create 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:15:54 localhost systemd[1]: Started libpod-conmon-2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.scope. Oct 5 04:15:54 localhost podman[72660]: 2025-10-05 08:15:54.202263719 +0000 UTC m=+0.085441534 container create 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
distribution-scope=public, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Oct 5 04:15:54 localhost podman[72638]: 2025-10-05 08:15:54.114325399 +0000 UTC m=+0.036507573 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 04:15:54 localhost systemd[1]: Started libcrun container. 
Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afdc6c9db300a9a5bad1fd5c74a09e603d29e9c3f62337bb3767b8218877207/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afdc6c9db300a9a5bad1fd5c74a09e603d29e9c3f62337bb3767b8218877207/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afdc6c9db300a9a5bad1fd5c74a09e603d29e9c3f62337bb3767b8218877207/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:54 localhost systemd[1]: Started libpod-conmon-1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.scope. Oct 5 04:15:54 localhost systemd[1]: Started libcrun container. Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5eda9ac1c94d4c818199f85ccf1cfb1f0d26c0be01afb2a73d9178a056789/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5eda9ac1c94d4c818199f85ccf1cfb1f0d26c0be01afb2a73d9178a056789/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5eda9ac1c94d4c818199f85ccf1cfb1f0d26c0be01afb2a73d9178a056789/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 04:15:54 localhost podman[72660]: 2025-10-05 08:15:54.152074655 +0000 UTC m=+0.035252510 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:15:54 localhost podman[72675]: 2025-10-05 08:15:54.279324703 +0000 UTC m=+0.090010407 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, version=17.1.9, summary=Red 
Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:15:54 localhost podman[72638]: 2025-10-05 08:15:54.30237651 +0000 UTC m=+0.224558654 container init 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, release=1, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:54 localhost podman[72675]: 2025-10-05 08:15:54.318138958 +0000 UTC m=+0.128824652 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, version=17.1.9, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, release=2, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. 
Oct 5 04:15:54 localhost podman[72674]: 2025-10-05 08:15:54.328404247 +0000 UTC m=+0.139136032 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, release=1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:15:54 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Oct 5 04:15:54 localhost podman[72674]: 2025-10-05 08:15:54.334157294 +0000 UTC m=+0.144889099 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp17/openstack-iscsid, tcib_managed=true, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:15:54 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:15:54 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 04:15:54 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 5 04:15:54 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:15:54 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 04:15:54 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:15:54 localhost podman[72660]: 2025-10-05 08:15:54.369418112 +0000 UTC m=+0.252595897 container init 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, release=1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:15:54 localhost podman[72638]: 2025-10-05 08:15:54.377512081 +0000 UTC m=+0.299694185 container start 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:54 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume 
/lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Oct 5 04:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:15:54 localhost podman[72660]: 2025-10-05 08:15:54.395614144 +0000 UTC m=+0.278791919 container start 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9) Oct 5 04:15:54 localhost python3[70199]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=61cb19106b923f6601e2c325a34cdd49 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume 
/lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Oct 5 04:15:54 localhost podman[72723]: 2025-10-05 08:15:54.402341236 +0000 UTC m=+0.068829161 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 5 04:15:54 localhost podman[72723]: 2025-10-05 08:15:54.413398857 +0000 UTC m=+0.079886782 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:15:54 localhost podman[72723]: unhealthy Oct 5 04:15:54 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:15:54 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:15:54 localhost systemd[72737]: Queued start job for default target Main User Target. Oct 5 04:15:54 localhost systemd[72737]: Created slice User Application Slice. Oct 5 04:15:54 localhost systemd[72737]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 04:15:54 localhost systemd[72737]: Started Daily Cleanup of User's Temporary Directories. Oct 5 04:15:54 localhost systemd[72737]: Reached target Paths. Oct 5 04:15:54 localhost systemd[72737]: Reached target Timers. Oct 5 04:15:54 localhost systemd[72737]: Starting D-Bus User Message Bus Socket... Oct 5 04:15:54 localhost systemd[72737]: Starting Create User's Volatile Files and Directories... Oct 5 04:15:54 localhost systemd[72737]: Listening on D-Bus User Message Bus Socket. Oct 5 04:15:54 localhost systemd[72737]: Reached target Sockets. Oct 5 04:15:54 localhost systemd[72737]: Finished Create User's Volatile Files and Directories. Oct 5 04:15:54 localhost systemd[72737]: Reached target Basic System. Oct 5 04:15:54 localhost systemd[72737]: Reached target Main User Target. Oct 5 04:15:54 localhost systemd[72737]: Startup finished in 178ms. Oct 5 04:15:54 localhost systemd[1]: Started User Manager for UID 0. Oct 5 04:15:54 localhost systemd[1]: Started Session c9 of User root. 
Oct 5 04:15:54 localhost podman[72764]: 2025-10-05 08:15:54.559727635 +0000 UTC m=+0.157213165 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, release=1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:15:54 localhost podman[72764]: 2025-10-05 08:15:54.600306267 +0000 UTC m=+0.197791797 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:15:54 localhost podman[72764]: unhealthy Oct 5 04:15:54 localhost systemd[1]: session-c9.scope: Deactivated successfully. Oct 5 04:15:54 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:15:54 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:15:54 localhost kernel: device br-int entered promiscuous mode Oct 5 04:15:54 localhost NetworkManager[5970]: [1759652154.6239] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Oct 5 04:15:54 localhost systemd-udevd[72837]: Network interface NamePolicy= disabled on kernel command line. Oct 5 04:15:55 localhost python3[72857]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:55 localhost python3[72873]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:55 localhost python3[72889]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:55 localhost kernel: device genev_sys_6081 entered promiscuous mode Oct 5 04:15:55 localhost NetworkManager[5970]: [1759652155.6557] device (genev_sys_6081): carrier: link connected Oct 5 04:15:55 localhost NetworkManager[5970]: [1759652155.6562] manager: (genev_sys_6081): new 
Generic device (/org/freedesktop/NetworkManager/Devices/12) Oct 5 04:15:55 localhost systemd-udevd[72839]: Network interface NamePolicy= disabled on kernel command line. Oct 5 04:15:55 localhost python3[72908]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:56 localhost python3[72924]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:56 localhost python3[72944]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:56 localhost python3[72960]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:56 localhost python3[72978]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[72996]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[73012]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[73028]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:57 localhost python3[73044]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:15:58 localhost python3[73105]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652157.8890932-110423-121414966606503/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:58 localhost python3[73134]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652157.8890932-110423-121414966606503/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 5 04:15:59 localhost python3[73163]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652157.8890932-110423-121414966606503/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:15:59 localhost python3[73192]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652157.8890932-110423-121414966606503/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:00 localhost python3[73221]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652157.8890932-110423-121414966606503/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:00 localhost python3[73250]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652157.8890932-110423-121414966606503/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None 
local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:01 localhost python3[73266]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 04:16:01 localhost systemd[1]: Reloading. Oct 5 04:16:01 localhost systemd-sysv-generator[73292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:01 localhost systemd-rc-local-generator[73287]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:02 localhost python3[73318]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:02 localhost systemd[1]: Reloading. Oct 5 04:16:02 localhost systemd-rc-local-generator[73343]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:02 localhost systemd-sysv-generator[73346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:02 localhost systemd[1]: Starting ceilometer_agent_compute container... 
Oct 5 04:16:02 localhost tripleo-start-podman-container[73359]: Creating additional drop-in dependency for "ceilometer_agent_compute" (528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948) Oct 5 04:16:02 localhost systemd[1]: Reloading. Oct 5 04:16:02 localhost systemd-rc-local-generator[73415]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:02 localhost systemd-sysv-generator[73418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:03 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 5 04:16:03 localhost python3[73444]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:04 localhost systemd[1]: Reloading. Oct 5 04:16:04 localhost systemd-rc-local-generator[73471]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:04 localhost systemd-sysv-generator[73475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:04 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Oct 5 04:16:04 localhost systemd[1]: Started ceilometer_agent_ipmi container. 
Oct 5 04:16:04 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 04:16:04 localhost systemd[72737]: Activating special unit Exit the Session... Oct 5 04:16:04 localhost systemd[72737]: Stopped target Main User Target. Oct 5 04:16:04 localhost systemd[72737]: Stopped target Basic System. Oct 5 04:16:04 localhost systemd[72737]: Stopped target Paths. Oct 5 04:16:04 localhost systemd[72737]: Stopped target Sockets. Oct 5 04:16:04 localhost systemd[72737]: Stopped target Timers. Oct 5 04:16:04 localhost systemd[72737]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:16:04 localhost systemd[72737]: Closed D-Bus User Message Bus Socket. Oct 5 04:16:04 localhost systemd[72737]: Stopped Create User's Volatile Files and Directories. Oct 5 04:16:04 localhost systemd[72737]: Removed slice User Application Slice. Oct 5 04:16:04 localhost systemd[72737]: Reached target Shutdown. Oct 5 04:16:04 localhost systemd[72737]: Finished Exit the Session. Oct 5 04:16:04 localhost systemd[72737]: Reached target Exit the Session. Oct 5 04:16:04 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 04:16:04 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 04:16:04 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 04:16:04 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 04:16:04 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 04:16:04 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 04:16:04 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 04:16:05 localhost python3[73512]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:05 localhost systemd[1]: Reloading. Oct 5 04:16:05 localhost systemd-rc-local-generator[73540]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 04:16:05 localhost systemd-sysv-generator[73544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:05 localhost systemd[1]: Starting logrotate_crond container... Oct 5 04:16:05 localhost systemd[1]: Started logrotate_crond container. Oct 5 04:16:06 localhost python3[73581]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:06 localhost systemd[1]: Reloading. Oct 5 04:16:06 localhost systemd-rc-local-generator[73609]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:06 localhost systemd-sysv-generator[73613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:06 localhost systemd[1]: Starting nova_migration_target container... Oct 5 04:16:06 localhost systemd[1]: Started nova_migration_target container. Oct 5 04:16:07 localhost python3[73649]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:07 localhost systemd[1]: Reloading. 
Oct 5 04:16:07 localhost systemd-sysv-generator[73680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:07 localhost systemd-rc-local-generator[73677]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:07 localhost systemd[1]: Starting ovn_controller container... Oct 5 04:16:07 localhost tripleo-start-podman-container[73688]: Creating additional drop-in dependency for "ovn_controller" (2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c) Oct 5 04:16:07 localhost systemd[1]: Reloading. Oct 5 04:16:07 localhost systemd-rc-local-generator[73743]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:07 localhost systemd-sysv-generator[73747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:08 localhost systemd[1]: Started ovn_controller container. Oct 5 04:16:09 localhost python3[73772]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:16:09 localhost systemd[1]: Reloading. Oct 5 04:16:09 localhost systemd-sysv-generator[73804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:16:09 localhost systemd-rc-local-generator[73801]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:16:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:16:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:16:09 localhost systemd[1]: Starting ovn_metadata_agent container... Oct 5 04:16:09 localhost podman[73811]: 2025-10-05 08:16:09.452761339 +0000 UTC m=+0.088438335 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, distribution-scope=public, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:16:09 localhost systemd[1]: Started ovn_metadata_agent container. 
Oct 5 04:16:09 localhost podman[73811]: 2025-10-05 08:16:09.679000038 +0000 UTC m=+0.314677024 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, 
build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-type=git, release=1, container_name=metrics_qdr, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:16:09 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:16:09 localhost python3[73881]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:11 localhost python3[74002]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005471152 step=4 update_config_hash_only=False Oct 5 04:16:11 localhost python3[74018]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:16:12 localhost python3[74034]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 04:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:16:20 localhost podman[74039]: 2025-10-05 08:16:20.923955465 +0000 UTC m=+0.079696158 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1) Oct 5 04:16:20 localhost podman[74037]: 2025-10-05 08:16:20.973812579 +0000 UTC m=+0.136469659 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:16:20 localhost podman[74039]: 2025-10-05 08:16:20.988245062 +0000 UTC m=+0.143985745 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Oct 5 04:16:21 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:16:21 localhost podman[74037]: 2025-10-05 08:16:21.039172906 +0000 UTC m=+0.201829956 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9) Oct 5 04:16:21 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:16:21 localhost podman[74038]: 2025-10-05 08:16:21.04151934 +0000 UTC m=+0.199081252 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-07-21T13:07:52, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:16:21 localhost podman[74038]: 2025-10-05 08:16:21.125197264 +0000 UTC m=+0.282759146 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1) Oct 5 04:16:21 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:16:21 localhost podman[74109]: 2025-10-05 08:16:21.910045515 +0000 UTC m=+0.079453731 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc.) Oct 5 04:16:22 localhost podman[74109]: 2025-10-05 08:16:22.28040167 +0000 UTC m=+0.449809926 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.9, tcib_managed=true) Oct 5 04:16:22 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:16:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:16:24 localhost systemd[1]: tmp-crun.j1np9B.mount: Deactivated successfully. 
Oct 5 04:16:24 localhost podman[74130]: 2025-10-05 08:16:24.93435068 +0000 UTC m=+0.098674413 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, architecture=x86_64, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:16:24 localhost podman[74132]: 2025-10-05 08:16:24.987416672 +0000 UTC m=+0.144764735 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) 
Oct 5 04:16:24 localhost podman[74132]: 2025-10-05 08:16:24.996294993 +0000 UTC m=+0.153643066 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, distribution-scope=public) Oct 5 04:16:25 localhost podman[74130]: 2025-10-05 08:16:25.00427623 +0000 UTC m=+0.168599983 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Oct 5 04:16:25 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:16:25 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:16:25 localhost podman[74131]: 2025-10-05 08:16:25.086592668 +0000 UTC m=+0.245163955 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=ovn_controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64) Oct 5 04:16:25 localhost podman[74131]: 2025-10-05 08:16:25.106365855 +0000 
UTC m=+0.264937132 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Oct 5 04:16:25 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:16:25 localhost podman[74138]: 2025-10-05 08:16:25.192638269 +0000 UTC m=+0.346886738 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, tcib_managed=true, version=17.1.9, release=2, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:16:25 localhost podman[74138]: 2025-10-05 08:16:25.22648244 +0000 UTC m=+0.380730879 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Oct 5 04:16:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:16:33 localhost snmpd[68888]: empty variable list in _query Oct 5 04:16:33 localhost snmpd[68888]: empty variable list in _query Oct 5 04:16:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:16:39 localhost podman[74216]: 2025-10-05 08:16:39.915478089 +0000 UTC m=+0.083290795 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, tcib_managed=true, version=17.1.9, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc.) Oct 5 04:16:40 localhost podman[74216]: 2025-10-05 08:16:40.134585034 +0000 UTC m=+0.302397670 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=metrics_qdr, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, version=17.1.9) Oct 5 04:16:40 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:16:51 localhost podman[74263]: 2025-10-05 08:16:51.489297345 +0000 UTC m=+0.081087017 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:16:51 localhost systemd[1]: tmp-crun.xLRqSU.mount: Deactivated successfully. Oct 5 04:16:51 localhost podman[74260]: 2025-10-05 08:16:51.552385849 +0000 UTC m=+0.150975454 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:16:51 localhost podman[74263]: 2025-10-05 08:16:51.570463704 +0000 UTC m=+0.162253446 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:16:51 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:16:51 localhost podman[74262]: 2025-10-05 08:16:51.609846512 +0000 UTC m=+0.204583604 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:16:51 localhost podman[74262]: 2025-10-05 08:16:51.616198222 +0000 UTC m=+0.210935314 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:16:51 localhost podman[74260]: 2025-10-05 08:16:51.625534352 +0000 UTC m=+0.224123957 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33) Oct 5 04:16:51 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:16:51 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:16:52 localhost podman[74381]: 2025-10-05 08:16:52.451038892 +0000 UTC m=+0.118670477 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:16:52 localhost podman[74381]: 2025-10-05 08:16:52.849041976 +0000 UTC m=+0.516673561 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64) Oct 5 04:16:52 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:16:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:16:55 localhost podman[74423]: 2025-10-05 08:16:55.895886823 +0000 UTC m=+0.058054708 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:16:55 localhost podman[74423]: 2025-10-05 08:16:55.908179134 +0000 UTC m=+0.070347069 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git) Oct 5 04:16:55 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:16:55 localhost podman[74421]: 2025-10-05 08:16:55.969931242 +0000 UTC m=+0.134451321 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:16:56 localhost podman[74421]: 2025-10-05 08:16:56.01308638 +0000 UTC m=+0.177606449 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1) Oct 5 04:16:56 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:16:56 localhost systemd[1]: tmp-crun.5BVQT4.mount: Deactivated successfully. Oct 5 04:16:56 localhost podman[74422]: 2025-10-05 08:16:56.046558558 +0000 UTC m=+0.209488074 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:16:56 localhost podman[74422]: 2025-10-05 08:16:56.071356184 +0000 UTC m=+0.234285760 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller) Oct 5 04:16:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:16:56 localhost podman[74424]: 2025-10-05 08:16:56.084172348 +0000 UTC m=+0.241046911 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=2, vcs-type=git, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:16:56 localhost podman[74424]: 2025-10-05 08:16:56.0961664 +0000 UTC m=+0.253040963 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
architecture=x86_64, name=rhosp17/openstack-collectd, release=2, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03) Oct 5 04:16:56 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:17:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:17:10 localhost podman[74506]: 2025-10-05 08:17:10.913254334 +0000 UTC m=+0.083398139 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.9) Oct 5 04:17:11 localhost podman[74506]: 2025-10-05 08:17:11.093702839 +0000 UTC m=+0.263846634 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, release=1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, version=17.1.9, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:17:11 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:17:21 localhost podman[74536]: 2025-10-05 08:17:21.920270862 +0000 UTC m=+0.084758616 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 04:17:21 localhost podman[74536]: 2025-10-05 08:17:21.927708392 +0000 UTC m=+0.092196096 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, name=rhosp17/openstack-cron, architecture=x86_64, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-cron-container, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:17:21 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:17:21 localhost podman[74537]: 2025-10-05 08:17:21.977555671 +0000 UTC m=+0.139378553 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, release=1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9) Oct 5 04:17:22 localhost podman[74537]: 2025-10-05 08:17:22.00846665 +0000 UTC m=+0.170289542 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:17:22 localhost systemd[1]: tmp-crun.5iX7HL.mount: Deactivated successfully. 
Oct 5 04:17:22 localhost podman[74535]: 2025-10-05 08:17:22.020721709 +0000 UTC m=+0.187717330 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:17:22 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:17:22 localhost podman[74535]: 2025-10-05 08:17:22.050849968 +0000 UTC m=+0.217845629 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute) Oct 5 04:17:22 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:17:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:17:23 localhost podman[74608]: 2025-10-05 08:17:23.013851938 +0000 UTC m=+0.074690116 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, vcs-type=git, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 04:17:23 localhost podman[74608]: 2025-10-05 08:17:23.388245368 +0000 UTC m=+0.449083536 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_migration_target, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9) Oct 5 04:17:23 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:17:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:17:26 localhost podman[74631]: 2025-10-05 08:17:26.904322151 +0000 UTC m=+0.074780887 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:17:26 localhost systemd[1]: tmp-crun.FE66um.mount: Deactivated successfully. Oct 5 04:17:26 localhost podman[74632]: 2025-10-05 08:17:26.963418739 +0000 UTC m=+0.130990688 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team) Oct 5 04:17:26 localhost podman[74631]: 2025-10-05 08:17:26.973448037 +0000 UTC m=+0.143906723 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Oct 5 04:17:26 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:17:27 localhost podman[74632]: 2025-10-05 08:17:27.016460491 +0000 UTC m=+0.184032370 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible) Oct 5 04:17:27 localhost systemd[1]: tmp-crun.3M6GVD.mount: Deactivated successfully. 
Oct 5 04:17:27 localhost podman[74633]: 2025-10-05 08:17:27.028093255 +0000 UTC m=+0.192568241 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9) Oct 5 04:17:27 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:17:27 localhost podman[74633]: 2025-10-05 08:17:27.067141453 +0000 UTC m=+0.231616439 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:17:27 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:17:27 localhost podman[74636]: 2025-10-05 08:17:27.0681507 +0000 UTC m=+0.230537320 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, release=2, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, architecture=x86_64, com.redhat.component=openstack-collectd-container) Oct 5 04:17:27 localhost podman[74636]: 2025-10-05 08:17:27.153442409 +0000 UTC m=+0.315829099 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, 
build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=) Oct 5 04:17:27 localhost systemd[1]: 
9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:17:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:17:41 localhost systemd[1]: tmp-crun.vrcF8B.mount: Deactivated successfully. Oct 5 04:17:41 localhost podman[74714]: 2025-10-05 08:17:41.909783293 +0000 UTC m=+0.082964139 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, tcib_managed=true, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.9) Oct 5 04:17:42 localhost podman[74714]: 2025-10-05 08:17:42.1235074 +0000 UTC m=+0.296688256 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.33.12, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd) Oct 5 04:17:42 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:17:52 localhost podman[74744]: 2025-10-05 08:17:52.952364235 +0000 UTC m=+0.093453390 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, release=1, version=17.1.9, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc.) Oct 5 04:17:53 localhost podman[74744]: 2025-10-05 08:17:53.001126384 +0000 UTC m=+0.142215539 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:17:53 localhost podman[74745]: 2025-10-05 08:17:53.010260519 +0000 UTC m=+0.146081153 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, release=1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Oct 5 04:17:53 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:17:53 localhost podman[74745]: 2025-10-05 08:17:53.02218915 +0000 UTC m=+0.158009744 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1) Oct 5 04:17:53 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:17:53 localhost podman[74746]: 2025-10-05 08:17:53.108688412 +0000 UTC m=+0.241572927 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, release=1, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, vcs-type=git, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:17:53 localhost podman[74746]: 2025-10-05 08:17:53.136290682 +0000 UTC m=+0.269175197 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:17:53 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:17:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:17:53 localhost podman[74886]: 2025-10-05 08:17:53.653120976 +0000 UTC m=+0.090608753 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:17:53 localhost podman[74936]: 2025-10-05 08:17:53.83693804 +0000 UTC m=+0.087296495 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 04:17:53 localhost podman[74936]: 2025-10-05 08:17:53.934206131 +0000 UTC m=+0.184564546 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, 
vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 04:17:54 localhost podman[74886]: 2025-10-05 08:17:54.086578562 +0000 UTC m=+0.524066349 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:17:54 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:17:57 localhost systemd[1]: tmp-crun.egIc3n.mount: Deactivated successfully. Oct 5 04:17:58 localhost podman[75082]: 2025-10-05 08:17:57.947660297 +0000 UTC m=+0.104118946 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:17:58 localhost podman[75084]: 2025-10-05 08:17:58.00885015 +0000 UTC m=+0.162398221 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, release=2, container_name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc.) Oct 5 04:17:58 localhost podman[75084]: 2025-10-05 08:17:58.015363234 +0000 UTC m=+0.168911315 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:04:03, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1) Oct 5 04:17:58 localhost podman[75083]: 2025-10-05 08:17:57.977995571 +0000 UTC m=+0.098828365 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, tcib_managed=true, batch=17.1_20250721.1) Oct 5 04:17:58 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:17:58 localhost podman[75082]: 2025-10-05 08:17:58.035077903 +0000 UTC m=+0.191536522 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:17:58 localhost systemd[1]: 
2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:17:58 localhost podman[75083]: 2025-10-05 08:17:58.062165331 +0000 UTC m=+0.182998094 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:17:58 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:17:58 localhost podman[75081]: 2025-10-05 08:17:58.104667902 +0000 UTC m=+0.263605688 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1) Oct 5 04:17:58 localhost podman[75081]: 2025-10-05 08:17:58.150196714 +0000 UTC m=+0.309134530 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, 
build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:17:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:18:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:18:12 localhost podman[75161]: 2025-10-05 08:18:12.909164776 +0000 UTC m=+0.076631088 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, release=1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:18:13 localhost podman[75161]: 2025-10-05 08:18:13.083164397 +0000 UTC m=+0.250630669 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., container_name=metrics_qdr) Oct 5 04:18:13 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:18:23 localhost podman[75189]: 2025-10-05 08:18:23.908529258 +0000 UTC m=+0.082455244 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:18:23 localhost systemd[1]: tmp-crun.zxVjeu.mount: Deactivated successfully. Oct 5 04:18:23 localhost podman[75189]: 2025-10-05 08:18:23.970313907 +0000 UTC m=+0.144239893 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:18:23 localhost podman[75190]: 2025-10-05 08:18:23.979326459 +0000 UTC m=+0.147166861 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:18:23 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:18:24 localhost systemd[1]: tmp-crun.rIGgvF.mount: Deactivated successfully. 
Oct 5 04:18:24 localhost podman[75191]: 2025-10-05 08:18:24.025268322 +0000 UTC m=+0.188936083 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12) Oct 5 04:18:24 localhost podman[75190]: 2025-10-05 08:18:24.040190913 +0000 UTC m=+0.208031285 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible) Oct 5 04:18:24 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:18:24 localhost podman[75191]: 2025-10-05 08:18:24.059180093 +0000 UTC m=+0.222847864 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, distribution-scope=public, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:18:24 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:18:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:18:24 localhost systemd[1]: tmp-crun.d2AjVe.mount: Deactivated successfully. 
Oct 5 04:18:24 localhost podman[75262]: 2025-10-05 08:18:24.918086418 +0000 UTC m=+0.086214654 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1, container_name=nova_migration_target, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:18:25 localhost podman[75262]: 2025-10-05 08:18:25.27427193 +0000 UTC m=+0.442400206 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 5 04:18:25 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:18:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:18:28 localhost systemd[1]: tmp-crun.jLccWX.mount: Deactivated successfully. 
Oct 5 04:18:28 localhost podman[75286]: 2025-10-05 08:18:28.923499527 +0000 UTC m=+0.087200931 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:18:28 localhost podman[75286]: 2025-10-05 08:18:28.970664253 +0000 UTC 
m=+0.134365647 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:18:28 localhost podman[75285]: 2025-10-05 08:18:28.98207771 +0000 UTC m=+0.146255557 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, release=1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:18:28 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:18:29 localhost podman[75287]: 2025-10-05 08:18:29.030017806 +0000 UTC m=+0.190151785 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:18:29 localhost podman[75287]: 2025-10-05 08:18:29.043114588 +0000 UTC m=+0.203248547 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, build-date=2025-07-21T13:27:15, vcs-type=git, version=17.1.9, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid) Oct 5 04:18:29 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:18:29 localhost podman[75288]: 2025-10-05 08:18:29.087359726 +0000 UTC m=+0.243288552 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, tcib_managed=true, batch=17.1_20250721.1, release=2, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:18:29 localhost podman[75288]: 2025-10-05 08:18:29.096226654 +0000 UTC m=+0.252155500 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.buildah.version=1.33.12, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, release=2, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, tcib_managed=true) Oct 5 04:18:29 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:18:29 localhost podman[75285]: 2025-10-05 08:18:29.108630146 +0000 UTC m=+0.272807973 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:18:29 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:18:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:18:43 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:18:43 localhost recover_tripleo_nova_virtqemud[75366]: 63458 Oct 5 04:18:43 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:18:43 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:18:43 localhost systemd[1]: tmp-crun.E51l5a.mount: Deactivated successfully. 
Oct 5 04:18:43 localhost podman[75364]: 2025-10-05 08:18:43.902344743 +0000 UTC m=+0.074863811 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, release=1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., vcs-type=git, 
build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:18:44 localhost podman[75364]: 2025-10-05 08:18:44.083832885 +0000 UTC m=+0.256351913 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, 
com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 5 04:18:44 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:18:54 localhost podman[75396]: 2025-10-05 08:18:54.900285138 +0000 UTC m=+0.070824292 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=logrotate_crond, distribution-scope=public) Oct 5 04:18:54 localhost podman[75397]: 2025-10-05 08:18:54.922528025 +0000 UTC m=+0.086926134 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:18:54 localhost podman[75397]: 2025-10-05 08:18:54.946441628 +0000 UTC m=+0.110839687 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 5 04:18:54 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:18:54 localhost podman[75395]: 2025-10-05 08:18:54.966397033 +0000 UTC m=+0.136522336 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33) Oct 5 04:18:54 localhost podman[75395]: 2025-10-05 08:18:54.991126476 +0000 UTC m=+0.161251799 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:18:55 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:18:55 localhost podman[75396]: 2025-10-05 08:18:55.031038018 +0000 UTC m=+0.201577172 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9) Oct 5 04:18:55 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:18:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:18:55 localhost podman[75466]: 2025-10-05 08:18:55.911859102 +0000 UTC m=+0.079889145 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, container_name=nova_migration_target, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9) Oct 5 04:18:56 localhost podman[75466]: 2025-10-05 08:18:56.237952196 +0000 UTC m=+0.405982159 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37) Oct 5 04:18:56 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:18:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:18:59 localhost podman[75569]: 2025-10-05 08:18:59.911739973 +0000 UTC m=+0.078930799 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:18:59 localhost systemd[1]: tmp-crun.O4X3UV.mount: Deactivated successfully. Oct 5 04:18:59 localhost podman[75566]: 2025-10-05 08:18:59.929228242 +0000 UTC m=+0.096151121 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:18:59 localhost podman[75569]: 2025-10-05 08:18:59.948403507 +0000 UTC m=+0.115594313 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-type=git, 
com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, version=17.1.9, maintainer=OpenStack TripleO Team, release=2, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) 
Oct 5 04:18:59 localhost podman[75568]: 2025-10-05 08:18:59.975051863 +0000 UTC m=+0.140458102 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.buildah.version=1.33.12, container_name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:19:00 localhost podman[75568]: 2025-10-05 08:19:00.013168416 +0000 UTC m=+0.178574675 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, release=1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, container_name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1) Oct 5 04:19:00 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:19:00 localhost podman[75566]: 2025-10-05 08:19:00.04795194 +0000 UTC m=+0.214874839 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:19:00 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:19:00 localhost podman[75567]: 2025-10-05 08:19:00.018684974 +0000 UTC m=+0.186356773 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 5 04:19:00 localhost podman[75567]: 2025-10-05 08:19:00.099133164 +0000 UTC 
m=+0.266804923 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public) Oct 5 04:19:00 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:19:00 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:19:05 localhost python3[75699]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:06 localhost python3[75744]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652345.620546-114912-212474111518856/source _original_basename=tmpc02059gy follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:07 localhost python3[75774]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:19:09 localhost ansible-async_wrapper.py[75946]: Invoked with 873882488799 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652348.7099407-115108-147772895841413/AnsiballZ_command.py _ Oct 5 04:19:09 localhost ansible-async_wrapper.py[75949]: Starting module and watcher Oct 5 04:19:09 localhost ansible-async_wrapper.py[75949]: Start watching 75950 (3600) Oct 5 04:19:09 localhost ansible-async_wrapper.py[75950]: Start module (75950) Oct 5 04:19:09 localhost ansible-async_wrapper.py[75946]: Return async_wrapper task started. Oct 5 04:19:09 localhost python3[75970]: ansible-ansible.legacy.async_status Invoked with jid=873882488799.75946 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:19:13 localhost puppet-user[75962]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Oct 5 04:19:13 localhost puppet-user[75962]: (file: /etc/puppet/hiera.yaml) Oct 5 04:19:13 localhost puppet-user[75962]: Warning: Undefined variable '::deploy_config_name'; Oct 5 04:19:13 localhost puppet-user[75962]: (file & line not available) Oct 5 04:19:13 localhost puppet-user[75962]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Oct 5 04:19:13 localhost puppet-user[75962]: (file & line not available) Oct 5 04:19:13 localhost puppet-user[75962]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Oct 5 04:19:13 localhost puppet-user[75962]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:19:13 localhost puppet-user[75962]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:19:13 localhost puppet-user[75962]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:19:13 localhost puppet-user[75962]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:19:13 localhost puppet-user[75962]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:19:13 localhost puppet-user[75962]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:19:13 localhost puppet-user[75962]: with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:19:13 localhost puppet-user[75962]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:19:13 localhost puppet-user[75962]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:19:13 localhost puppet-user[75962]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:19:13 localhost puppet-user[75962]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:19:13 localhost puppet-user[75962]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:19:13 localhost puppet-user[75962]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:19:13 localhost puppet-user[75962]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:19:13 localhost puppet-user[75962]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Oct 5 04:19:13 localhost puppet-user[75962]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Oct 5 04:19:13 localhost puppet-user[75962]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Oct 5 04:19:13 localhost puppet-user[75962]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Oct 5 04:19:13 localhost puppet-user[75962]: Notice: Compiled catalog for np0005471152.localdomain in environment production in 0.21 seconds Oct 5 04:19:13 localhost puppet-user[75962]: Notice: Applied catalog in 0.28 seconds Oct 5 04:19:13 localhost puppet-user[75962]: Application: Oct 5 04:19:13 localhost puppet-user[75962]: Initial environment: production Oct 5 04:19:13 localhost puppet-user[75962]: Converged environment: production Oct 5 04:19:13 localhost puppet-user[75962]: Run mode: user Oct 5 04:19:13 localhost puppet-user[75962]: Changes: Oct 5 04:19:13 localhost puppet-user[75962]: Events: Oct 5 04:19:13 localhost puppet-user[75962]: Resources: Oct 5 04:19:13 localhost puppet-user[75962]: Total: 19 Oct 5 04:19:13 localhost puppet-user[75962]: Time: Oct 5 04:19:13 localhost puppet-user[75962]: Schedule: 0.00 Oct 5 04:19:13 localhost puppet-user[75962]: Package: 0.00 Oct 5 04:19:13 localhost puppet-user[75962]: Exec: 0.01 Oct 5 04:19:13 localhost puppet-user[75962]: Augeas: 0.01 Oct 5 04:19:13 localhost puppet-user[75962]: File: 0.02 Oct 5 04:19:13 localhost puppet-user[75962]: Service: 0.05 Oct 5 04:19:13 localhost puppet-user[75962]: Config retrieval: 0.27 Oct 5 04:19:13 localhost puppet-user[75962]: Transaction evaluation: 0.27 Oct 5 04:19:13 localhost puppet-user[75962]: Catalog application: 0.28 Oct 5 04:19:13 localhost puppet-user[75962]: Last run: 1759652353 Oct 5 04:19:13 localhost puppet-user[75962]: Filebucket: 0.00 Oct 5 04:19:13 localhost puppet-user[75962]: Total: 0.28 Oct 5 04:19:13 localhost puppet-user[75962]: Version: Oct 5 04:19:13 localhost puppet-user[75962]: Config: 1759652353 Oct 5 04:19:13 localhost puppet-user[75962]: Puppet: 7.10.0 Oct 5 04:19:13 localhost ansible-async_wrapper.py[75950]: Module complete (75950) Oct 5 04:19:14 localhost ansible-async_wrapper.py[75949]: Done in kid B. 
Oct 5 04:19:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:19:14 localhost podman[76094]: 2025-10-05 08:19:14.419381903 +0000 UTC m=+0.085156638 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true) Oct 5 04:19:14 localhost podman[76094]: 2025-10-05 08:19:14.609277889 +0000 UTC m=+0.275052664 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:19:14 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:19:19 localhost python3[76139]: ansible-ansible.legacy.async_status Invoked with jid=873882488799.75946 mode=status _async_dir=/tmp/.ansible_async Oct 5 04:19:20 localhost python3[76155]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:19:21 localhost python3[76171]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:19:21 localhost python3[76221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Oct 5 04:19:21 localhost python3[76239]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpaazj5p9e recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Oct 5 04:19:22 localhost python3[76269]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:23 localhost python3[76374]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Oct 5 04:19:24 localhost python3[76393]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 5 04:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:19:25 localhost podman[76426]: 2025-10-05 08:19:25.207462754 +0000 UTC m=+0.081073437 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, 
architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, version=17.1.9, container_name=ceilometer_agent_compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:19:25 localhost podman[76428]: 2025-10-05 08:19:25.242990007 +0000 UTC m=+0.113133807 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4) Oct 5 04:19:25 localhost podman[76426]: 2025-10-05 08:19:25.263303403 +0000 UTC m=+0.136914146 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:19:25 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:19:25 localhost python3[76425]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:19:25 localhost podman[76428]: 2025-10-05 08:19:25.298259571 +0000 UTC m=+0.168403401 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc.) Oct 5 04:19:25 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:19:25 localhost podman[76427]: 2025-10-05 08:19:25.305683981 +0000 UTC m=+0.176291163 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:19:25 localhost podman[76427]: 2025-10-05 08:19:25.3902275 +0000 UTC m=+0.260834712 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-cron) Oct 5 04:19:25 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:19:25 localhost python3[76542]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:26 localhost python3[76560]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:19:26 localhost podman[76623]: 2025-10-05 08:19:26.579908556 +0000 UTC m=+0.080235815 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, container_name=nova_migration_target, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public) Oct 5 04:19:26 localhost python3[76622]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:26 localhost python3[76664]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 04:19:26 localhost podman[76623]: 2025-10-05 08:19:26.935200502 +0000 UTC m=+0.435527821 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp 
openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 5 04:19:26 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:19:27 localhost python3[76726]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:27 localhost python3[76744]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:28 localhost python3[76806]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:28 localhost python3[76824]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:29 localhost python3[76854]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:19:29 localhost systemd[1]: Reloading. Oct 5 04:19:29 localhost systemd-rc-local-generator[76876]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:19:29 localhost systemd-sysv-generator[76882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:19:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:19:30 localhost python3[76940]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:19:30 localhost podman[76959]: 2025-10-05 08:19:30.300865479 +0000 UTC m=+0.080395139 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, release=1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:19:30 localhost podman[76961]: 2025-10-05 08:19:30.317914747 +0000 UTC m=+0.087959022 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, container_name=iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12) Oct 5 04:19:30 localhost podman[76961]: 2025-10-05 08:19:30.353223665 +0000 UTC m=+0.123267920 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-type=git, name=rhosp17/openstack-iscsid, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team) Oct 5 04:19:30 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:19:30 localhost python3[76958]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:30 localhost podman[76959]: 2025-10-05 08:19:30.405178549 +0000 UTC m=+0.184708209 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:19:30 localhost podman[76960]: 2025-10-05 08:19:30.418168058 +0000 UTC m=+0.192593961 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:19:30 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:19:30 localhost podman[76967]: 2025-10-05 08:19:30.358407763 +0000 UTC m=+0.127470102 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, com.redhat.component=openstack-collectd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd) Oct 5 04:19:30 localhost podman[76960]: 2025-10-05 08:19:30.465197641 +0000 UTC m=+0.239623524 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:19:30 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:19:30 localhost podman[76967]: 2025-10-05 08:19:30.49312948 +0000 UTC m=+0.262191839 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, tcib_managed=true, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12) Oct 5 04:19:30 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:19:30 localhost python3[77110]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Oct 5 04:19:31 localhost python3[77128]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:19:31 localhost python3[77158]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:19:31 localhost systemd[1]: Reloading. Oct 5 04:19:31 localhost systemd-rc-local-generator[77184]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:19:31 localhost systemd-sysv-generator[77188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:19:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:19:32 localhost systemd[1]: Starting Create netns directory... Oct 5 04:19:32 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 04:19:32 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 04:19:32 localhost systemd[1]: Finished Create netns directory. 
Oct 5 04:19:32 localhost python3[77216]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Oct 5 04:19:34 localhost python3[77275]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Oct 5 04:19:34 localhost podman[77313]: 2025-10-05 08:19:34.752105227 +0000 UTC m=+0.075640542 container create 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, version=17.1.9, config_id=tripleo_step5, container_name=nova_compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:19:34 localhost systemd[1]: Started libpod-conmon-700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.scope. Oct 5 04:19:34 localhost systemd[1]: Started libcrun container. 
Oct 5 04:19:34 localhost podman[77313]: 2025-10-05 08:19:34.711324412 +0000 UTC m=+0.034859787 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa25cf1e4bebd8611528fd028e796ee83b34c4bc80959cdc10d28c4b2f1ae/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa25cf1e4bebd8611528fd028e796ee83b34c4bc80959cdc10d28c4b2f1ae/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa25cf1e4bebd8611528fd028e796ee83b34c4bc80959cdc10d28c4b2f1ae/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa25cf1e4bebd8611528fd028e796ee83b34c4bc80959cdc10d28c4b2f1ae/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbaa25cf1e4bebd8611528fd028e796ee83b34c4bc80959cdc10d28c4b2f1ae/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:19:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:19:34 localhost podman[77313]: 2025-10-05 08:19:34.834874708 +0000 UTC m=+0.158410043 container init 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:19:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:19:34 localhost podman[77313]: 2025-10-05 08:19:34.866504328 +0000 UTC m=+0.190039683 container start 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:19:34 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Oct 5 04:19:34 localhost python3[77275]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:19:34 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 04:19:34 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Oct 5 04:19:34 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 04:19:34 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 04:19:34 localhost podman[77334]: 2025-10-05 08:19:34.970275733 +0000 UTC m=+0.097700243 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Oct 5 04:19:35 localhost podman[77334]: 2025-10-05 08:19:35.016240927 +0000 UTC m=+0.143665477 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.9, build-date=2025-07-21T14:48:37, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step5, distribution-scope=public, vcs-type=git)
Oct 5 04:19:35 localhost podman[77334]: unhealthy
Oct 5 04:19:35 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:19:35 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'.
Oct 5 04:19:35 localhost systemd[77351]: Queued start job for default target Main User Target.
Oct 5 04:19:35 localhost systemd[77351]: Created slice User Application Slice.
Oct 5 04:19:35 localhost systemd[77351]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 5 04:19:35 localhost systemd[77351]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 04:19:35 localhost systemd[77351]: Reached target Paths.
Oct 5 04:19:35 localhost systemd[77351]: Reached target Timers.
Oct 5 04:19:35 localhost systemd[77351]: Starting D-Bus User Message Bus Socket...
Oct 5 04:19:35 localhost systemd[77351]: Starting Create User's Volatile Files and Directories...
Oct 5 04:19:35 localhost systemd[77351]: Finished Create User's Volatile Files and Directories.
Oct 5 04:19:35 localhost systemd[77351]: Listening on D-Bus User Message Bus Socket.
Oct 5 04:19:35 localhost systemd[77351]: Reached target Sockets.
Oct 5 04:19:35 localhost systemd[77351]: Reached target Basic System.
Oct 5 04:19:35 localhost systemd[77351]: Reached target Main User Target.
Oct 5 04:19:35 localhost systemd[77351]: Startup finished in 131ms.
Oct 5 04:19:35 localhost systemd[1]: Started User Manager for UID 0.
Oct 5 04:19:35 localhost systemd[1]: Started Session c10 of User root.
Oct 5 04:19:35 localhost systemd[1]: session-c10.scope: Deactivated successfully.
Oct 5 04:19:35 localhost podman[77436]: 2025-10-05 08:19:35.397888201 +0000 UTC m=+0.088786583 container create 464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, release=1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_id=tripleo_step5, io.openshift.tags=rhosp 
osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 5 04:19:35 localhost systemd[1]: Started libpod-conmon-464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9.scope.
Oct 5 04:19:35 localhost podman[77436]: 2025-10-05 08:19:35.353196342 +0000 UTC m=+0.044094774 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 5 04:19:35 localhost systemd[1]: Started libcrun container.
Oct 5 04:19:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79319d12525dee5bc50b02f4506bc7bc6e833cf5798b23ca8359393e14a5b8e7/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 5 04:19:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79319d12525dee5bc50b02f4506bc7bc6e833cf5798b23ca8359393e14a5b8e7/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 04:19:35 localhost podman[77436]: 2025-10-05 08:19:35.475857074 +0000 UTC m=+0.166755436 container init 464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_wait_for_compute_service, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 
4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public) Oct 5 04:19:35 localhost podman[77436]: 2025-10-05 08:19:35.483278764 +0000 UTC m=+0.174177146 container start 464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, version=17.1.9, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, tcib_managed=true, managed_by=tripleo_ansible, release=1, container_name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:19:35 localhost podman[77436]: 2025-10-05 08:19:35.48351337 +0000 UTC m=+0.174411742 container attach 
464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, release=1, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, architecture=x86_64, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37) Oct 5 04:19:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:19:44 localhost systemd[1]: tmp-crun.3RXvmT.mount: Deactivated successfully. Oct 5 04:19:44 localhost podman[77459]: 2025-10-05 08:19:44.93084011 +0000 UTC m=+0.094750324 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, release=1, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step1, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:19:45 localhost podman[77459]: 2025-10-05 08:19:45.114145611 +0000 UTC m=+0.278055915 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Oct 5 04:19:45 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully.
Oct 5 04:19:45 localhost systemd[1]: Stopping User Manager for UID 0...
Oct 5 04:19:45 localhost systemd[77351]: Activating special unit Exit the Session...
Oct 5 04:19:45 localhost systemd[77351]: Stopped target Main User Target.
Oct 5 04:19:45 localhost systemd[77351]: Stopped target Basic System.
Oct 5 04:19:45 localhost systemd[77351]: Stopped target Paths.
Oct 5 04:19:45 localhost systemd[77351]: Stopped target Sockets.
Oct 5 04:19:45 localhost systemd[77351]: Stopped target Timers.
Oct 5 04:19:45 localhost systemd[77351]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 5 04:19:45 localhost systemd[77351]: Closed D-Bus User Message Bus Socket.
Oct 5 04:19:45 localhost systemd[77351]: Stopped Create User's Volatile Files and Directories.
Oct 5 04:19:45 localhost systemd[77351]: Removed slice User Application Slice.
Oct 5 04:19:45 localhost systemd[77351]: Reached target Shutdown.
Oct 5 04:19:45 localhost systemd[77351]: Finished Exit the Session.
Oct 5 04:19:45 localhost systemd[77351]: Reached target Exit the Session.
Oct 5 04:19:45 localhost systemd[1]: user@0.service: Deactivated successfully.
Oct 5 04:19:45 localhost systemd[1]: Stopped User Manager for UID 0.
Oct 5 04:19:45 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 5 04:19:45 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 5 04:19:45 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 5 04:19:45 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 5 04:19:45 localhost systemd[1]: Removed slice User Slice of UID 0.
Oct 5 04:19:52 localhost systemd[1]: session-27.scope: Deactivated successfully.
Oct 5 04:19:52 localhost systemd[1]: session-27.scope: Consumed 3.082s CPU time.
Oct 5 04:19:52 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit.
Oct 5 04:19:52 localhost systemd-logind[760]: Removed session 27.
Oct 5 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.
Oct 5 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.
Oct 5 04:19:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.
Oct 5 04:19:55 localhost systemd[1]: tmp-crun.jpMrHI.mount: Deactivated successfully.
Oct 5 04:19:55 localhost podman[77490]: 2025-10-05 08:19:55.910488873 +0000 UTC m=+0.081301882 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Oct 5 04:19:55 localhost podman[77491]: 2025-10-05 08:19:55.963393124 +0000 UTC m=+0.131958974 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, release=1, tcib_managed=true, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc.) Oct 5 04:19:55 localhost podman[77490]: 2025-10-05 08:19:55.969186389 +0000 UTC m=+0.139999368 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:19:55 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:19:56 localhost podman[77492]: 2025-10-05 08:19:56.025991684 +0000 UTC m=+0.192772035 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack 
TripleO Team, version=17.1.9, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible) Oct 5 04:19:56 localhost podman[77491]: 2025-10-05 08:19:56.051924339 +0000 UTC m=+0.220490129 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:19:56 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:19:56 localhost podman[77492]: 2025-10-05 08:19:56.081239387 +0000 UTC m=+0.248019688 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:19:56 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:19:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:19:57 localhost systemd[1]: tmp-crun.SAVvrU.mount: Deactivated successfully. 
Oct 5 04:19:57 localhost podman[77578]: 2025-10-05 08:19:57.696009322 +0000 UTC m=+0.087681864 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, container_name=nova_migration_target, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git) Oct 5 04:19:58 localhost podman[77578]: 2025-10-05 08:19:58.090091791 +0000 UTC m=+0.481764293 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, container_name=nova_migration_target, vcs-type=git, release=1, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:19:58 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:20:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:20:00 localhost podman[77663]: 2025-10-05 08:20:00.926913061 +0000 UTC m=+0.089502544 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:20:00 localhost podman[77663]: 2025-10-05 08:20:00.952147788 +0000 UTC 
m=+0.114737291 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, config_id=tripleo_step4) Oct 5 04:20:00 localhost podman[77664]: 2025-10-05 08:20:00.962835095 +0000 UTC m=+0.121638165 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, version=17.1.9, release=1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, 
name=rhosp17/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:20:00 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:20:00 localhost podman[77664]: 2025-10-05 08:20:00.975981569 +0000 UTC m=+0.134784669 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9) Oct 5 04:20:00 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:20:01 localhost podman[77662]: 2025-10-05 08:20:01.019878877 +0000 UTC m=+0.182195402 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 04:20:01 localhost podman[77665]: 2025-10-05 08:20:01.031383286 +0000 UTC m=+0.186587730 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=2, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd) Oct 5 04:20:01 localhost podman[77665]: 2025-10-05 08:20:01.072301714 +0000 UTC m=+0.227506198 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=2, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:20:01 localhost podman[77662]: 2025-10-05 08:20:01.081093 +0000 UTC m=+0.243409505 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public) Oct 5 04:20:01 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:20:01 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:20:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:20:05 localhost podman[77747]: 2025-10-05 08:20:05.914898057 +0000 UTC m=+0.082882756 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, tcib_managed=true) Oct 5 04:20:05 localhost podman[77747]: 2025-10-05 08:20:05.976235433 +0000 UTC m=+0.144220142 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, architecture=x86_64, distribution-scope=public, 
com.redhat.component=openstack-nova-compute-container) Oct 5 04:20:05 localhost podman[77747]: unhealthy Oct 5 04:20:05 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:20:05 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 04:20:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:20:15 localhost podman[77769]: 2025-10-05 08:20:15.908744536 +0000 UTC m=+0.075204250 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, config_id=tripleo_step1, release=1, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:20:16 localhost podman[77769]: 2025-10-05 08:20:16.109579197 +0000 UTC m=+0.276038901 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:20:16 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:20:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:20:26 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Oct 5 04:20:26 localhost recover_tripleo_nova_virtqemud[77811]: 63458 Oct 5 04:20:26 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:20:26 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:20:26 localhost podman[77798]: 2025-10-05 08:20:26.904037209 +0000 UTC m=+0.073093493 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-cron-container, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:20:26 localhost podman[77799]: 2025-10-05 08:20:26.920338217 +0000 UTC m=+0.083427881 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1) Oct 5 04:20:26 localhost podman[77799]: 2025-10-05 08:20:26.967706908 +0000 UTC m=+0.130796542 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:20:26 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:20:26 localhost podman[77797]: 2025-10-05 08:20:26.984419837 +0000 UTC m=+0.152422442 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:20:27 localhost podman[77798]: 2025-10-05 08:20:27.033814873 +0000 UTC m=+0.202871177 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, release=1, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:20:27 localhost podman[77797]: 2025-10-05 08:20:27.071563436 +0000 UTC m=+0.239566111 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:45:33, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:20:27 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:20:27 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:20:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:20:28 localhost systemd[1]: tmp-crun.2joepm.mount: Deactivated successfully. 
Oct 5 04:20:28 localhost podman[77870]: 2025-10-05 08:20:28.920418517 +0000 UTC m=+0.088155967 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, 
build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:20:29 localhost podman[77870]: 2025-10-05 08:20:29.308268628 +0000 UTC m=+0.476006038 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_migration_target, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:20:29 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:20:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:20:31 localhost podman[77895]: 2025-10-05 08:20:31.918747883 +0000 UTC m=+0.078701184 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team) Oct 5 04:20:31 localhost podman[77895]: 2025-10-05 08:20:31.952395896 +0000 UTC m=+0.112349187 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:20:31 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:20:32 localhost systemd[1]: tmp-crun.Z3PLlm.mount: Deactivated successfully. Oct 5 04:20:32 localhost podman[77894]: 2025-10-05 08:20:32.023314861 +0000 UTC m=+0.185961514 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, distribution-scope=public) Oct 5 04:20:32 localhost podman[77894]: 2025-10-05 08:20:32.041133459 +0000 UTC m=+0.203780152 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, architecture=x86_64, 
com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:20:32 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:20:32 localhost podman[77893]: 2025-10-05 08:20:32.078909963 +0000 UTC m=+0.245553523 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:20:32 localhost podman[77893]: 2025-10-05 08:20:32.135533113 +0000 UTC m=+0.302176703 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, 
version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:20:32 localhost podman[77896]: 2025-10-05 08:20:32.135946544 +0000 UTC m=+0.292973786 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=2, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:20:32 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:20:32 localhost podman[77896]: 2025-10-05 08:20:32.215335505 +0000 UTC m=+0.372362727 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=2) Oct 5 04:20:32 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:20:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:20:36 localhost systemd[1]: tmp-crun.EqwFMH.mount: Deactivated successfully. Oct 5 04:20:36 localhost podman[77976]: 2025-10-05 08:20:36.923254512 +0000 UTC m=+0.091517348 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_compute, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:20:37 localhost podman[77976]: 2025-10-05 08:20:37.00512175 +0000 UTC m=+0.173384556 container exec_died 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, distribution-scope=public) Oct 5 04:20:37 localhost podman[77976]: unhealthy Oct 5 04:20:37 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:20:37 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 04:20:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:20:46 localhost podman[77997]: 2025-10-05 08:20:46.939268177 +0000 UTC m=+0.108573545 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, 
release=1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:20:47 localhost podman[77997]: 2025-10-05 08:20:47.13417256 +0000 UTC m=+0.303477908 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:20:47 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:20:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:20:57 localhost podman[78027]: 2025-10-05 08:20:57.920216976 +0000 UTC m=+0.084357125 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, release=1, version=17.1.9, name=rhosp17/openstack-cron, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4) Oct 5 04:20:57 localhost podman[78027]: 2025-10-05 08:20:57.932119815 +0000 UTC m=+0.096259984 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:20:57 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:20:58 localhost podman[78026]: 2025-10-05 08:20:58.028207805 +0000 UTC m=+0.195612522 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4) Oct 5 04:20:58 localhost podman[78028]: 2025-10-05 08:20:58.0824158 +0000 UTC m=+0.243603780 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, 
batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true) Oct 5 04:20:58 localhost podman[78028]: 2025-10-05 08:20:58.110659978 +0000 UTC m=+0.271847938 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:20:58 localhost systemd[1]: 
aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:20:58 localhost podman[78026]: 2025-10-05 08:20:58.135632628 +0000 UTC m=+0.303037335 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vcs-type=git, io.openshift.tags=rhosp osp 
openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12) Oct 5 04:20:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:20:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:20:59 localhost podman[78159]: 2025-10-05 08:20:59.911730155 +0000 UTC m=+0.078638322 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:21:00 localhost podman[78159]: 2025-10-05 08:21:00.283270709 +0000 UTC m=+0.450178866 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 04:21:00 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. 
Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:21:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:21:02 localhost podman[78198]: 2025-10-05 08:21:02.918430355 +0000 UTC m=+0.082649039 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true) Oct 5 04:21:02 localhost podman[78199]: 2025-10-05 08:21:02.930370765 +0000 UTC m=+0.090529920 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:27:15, container_name=iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:21:02 localhost podman[78197]: 2025-10-05 08:21:02.989831399 +0000 UTC m=+0.155546123 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, release=1, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:21:02 localhost podman[78198]: 2025-10-05 08:21:02.998305766 +0000 UTC m=+0.162524460 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, container_name=ovn_controller, 
com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public) Oct 5 04:21:03 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:21:03 localhost podman[78199]: 2025-10-05 08:21:03.0421251 +0000 UTC m=+0.202284315 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:21:03 localhost podman[78197]: 2025-10-05 08:21:03.054514843 +0000 UTC m=+0.220229597 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:21:03 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:21:03 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:21:03 localhost podman[78200]: 2025-10-05 08:21:03.082494632 +0000 UTC m=+0.241099055 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, container_name=collectd) Oct 5 04:21:03 localhost podman[78200]: 2025-10-05 08:21:03.117401958 +0000 UTC m=+0.276006361 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, container_name=collectd, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd) Oct 5 04:21:03 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:21:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:21:07 localhost podman[78282]: 2025-10-05 08:21:07.915091696 +0000 UTC m=+0.082690698 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, batch=17.1_20250721.1) Oct 5 04:21:07 localhost podman[78282]: 2025-10-05 08:21:07.974439316 +0000 UTC m=+0.142038358 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 
nova-compute) Oct 5 04:21:07 localhost podman[78282]: unhealthy Oct 5 04:21:07 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:21:07 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 04:21:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:21:17 localhost podman[78304]: 2025-10-05 08:21:17.913844573 +0000 UTC m=+0.081688390 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, version=17.1.9) Oct 5 04:21:18 localhost podman[78304]: 2025-10-05 08:21:18.132448532 +0000 UTC m=+0.300292399 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:07:59, vcs-type=git, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:21:18 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:21:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:21:28 localhost podman[78333]: 2025-10-05 08:21:28.923322342 +0000 UTC m=+0.089192262 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, architecture=x86_64, tcib_managed=true, release=1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, version=17.1.9, io.openshift.expose-services=) Oct 5 04:21:28 localhost podman[78334]: 2025-10-05 08:21:28.966052187 +0000 UTC m=+0.130377805 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:21:28 localhost podman[78334]: 2025-10-05 08:21:28.999019421 +0000 UTC m=+0.163345049 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.9, container_name=logrotate_crond, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52) Oct 5 04:21:29 localhost systemd[1]: tmp-crun.lEJOPz.mount: Deactivated successfully. Oct 5 04:21:29 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:21:29 localhost podman[78335]: 2025-10-05 08:21:29.014453385 +0000 UTC m=+0.174157620 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4) Oct 5 04:21:29 localhost podman[78335]: 2025-10-05 08:21:29.043059841 +0000 UTC m=+0.202764116 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:21:29 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:21:29 localhost podman[78333]: 2025-10-05 08:21:29.095476816 +0000 UTC m=+0.261346716 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, version=17.1.9, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64) Oct 5 04:21:29 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:21:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:21:30 localhost podman[78404]: 2025-10-05 08:21:30.913432961 +0000 UTC m=+0.081054303 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:21:31 localhost podman[78404]: 2025-10-05 08:21:31.277829668 +0000 UTC m=+0.445450990 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, container_name=nova_migration_target, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:21:31 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:21:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:21:33 localhost podman[78428]: 2025-10-05 08:21:33.92213093 +0000 UTC m=+0.092107628 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public) Oct 5 04:21:33 localhost podman[78428]: 2025-10-05 08:21:33.964064664 +0000 UTC m=+0.134041372 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:21:33 localhost systemd[1]: tmp-crun.VS33cr.mount: Deactivated successfully. 
Oct 5 04:21:33 localhost podman[78430]: 2025-10-05 08:21:33.976351874 +0000 UTC m=+0.139839499 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, distribution-scope=public, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, container_name=iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team) Oct 5 04:21:33 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:21:33 localhost podman[78430]: 2025-10-05 08:21:33.989179368 +0000 UTC m=+0.152667023 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, distribution-scope=public, version=17.1.9, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid) Oct 5 04:21:34 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:21:34 localhost podman[78429]: 2025-10-05 08:21:34.077141216 +0000 UTC m=+0.243392965 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:21:34 localhost podman[78429]: 2025-10-05 08:21:34.129279353 +0000 UTC m=+0.295531102 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64) Oct 5 04:21:34 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:21:34 localhost podman[78436]: 2025-10-05 08:21:34.129881719 +0000 UTC m=+0.289811859 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=2, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:21:34 localhost podman[78436]: 2025-10-05 08:21:34.209868913 +0000 UTC m=+0.369799013 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack 
Platform 17.1 collectd, version=17.1.9) Oct 5 04:21:34 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:21:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:21:34 localhost recover_tripleo_nova_virtqemud[78514]: 63458 Oct 5 04:21:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:21:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:21:34 localhost systemd[1]: tmp-crun.Nacghh.mount: Deactivated successfully. Oct 5 04:21:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:21:38 localhost podman[78515]: 2025-10-05 08:21:38.91310316 +0000 UTC m=+0.079662156 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_id=tripleo_step5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, container_name=nova_compute, architecture=x86_64, release=1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:21:38 localhost podman[78515]: 2025-10-05 08:21:38.970625672 +0000 UTC m=+0.137184678 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_compute, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git) Oct 5 04:21:38 localhost podman[78515]: unhealthy Oct 5 04:21:38 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:21:38 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 04:21:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:21:48 localhost podman[78537]: 2025-10-05 08:21:48.918044544 +0000 UTC m=+0.084658161 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 5 04:21:49 localhost podman[78537]: 2025-10-05 08:21:49.114208402 +0000 UTC m=+0.280822009 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1) Oct 5 04:21:49 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:21:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:21:59 localhost podman[78567]: 2025-10-05 08:21:59.935150756 +0000 UTC m=+0.094604727 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:21:59 localhost podman[78567]: 2025-10-05 08:21:59.96737793 +0000 UTC m=+0.126831951 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:21:59 localhost systemd[1]: tmp-crun.Ioc52M.mount: Deactivated successfully. Oct 5 04:21:59 localhost podman[78568]: 2025-10-05 08:21:59.980820041 +0000 UTC m=+0.135113313 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1) Oct 5 04:21:59 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:22:00 localhost podman[78568]: 2025-10-05 08:22:00.010146016 +0000 UTC m=+0.164439298 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:22:00 localhost podman[78566]: 2025-10-05 08:22:00.02295981 +0000 UTC m=+0.184807564 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Oct 5 04:22:00 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:22:00 localhost podman[78566]: 2025-10-05 08:22:00.048298739 +0000 UTC m=+0.210146523 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, version=17.1.9, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 04:22:00 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:22:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:22:01 localhost podman[78702]: 2025-10-05 08:22:01.906081882 +0000 UTC m=+0.074066188 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 
17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-07-21T14:48:37, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public) Oct 5 04:22:02 localhost podman[78702]: 2025-10-05 08:22:02.286672693 +0000 UTC m=+0.454657059 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:22:02 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:22:04 localhost podman[78741]: 2025-10-05 08:22:04.93114831 +0000 UTC m=+0.101269035 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vcs-type=git, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:22:04 localhost podman[78742]: 2025-10-05 08:22:04.981340635 +0000 UTC m=+0.148432379 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible) Oct 5 04:22:05 localhost podman[78742]: 2025-10-05 08:22:05.031095158 +0000 UTC m=+0.198186902 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:22:05 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:22:05 localhost podman[78741]: 2025-10-05 08:22:05.053791546 +0000 UTC m=+0.223912251 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:22:05 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:22:05 localhost podman[78749]: 2025-10-05 08:22:05.036059121 +0000 UTC m=+0.193577419 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=2, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:22:05 localhost podman[78749]: 2025-10-05 08:22:05.167204077 +0000 UTC m=+0.324722315 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, tcib_managed=true, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, architecture=x86_64) Oct 5 04:22:05 localhost podman[78743]: 2025-10-05 08:22:05.166464307 +0000 UTC m=+0.321528269 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 5 04:22:05 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:22:05 localhost podman[78743]: 2025-10-05 08:22:05.251907097 +0000 UTC m=+0.406971099 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15) Oct 5 04:22:05 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:22:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:22:09 localhost podman[78829]: 2025-10-05 08:22:09.92065062 +0000 UTC m=+0.082538575 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, version=17.1.9, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute) Oct 5 04:22:09 localhost podman[78829]: 2025-10-05 08:22:09.977360159 +0000 UTC m=+0.139248104 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_compute, 
vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:22:09 localhost podman[78829]: unhealthy Oct 5 04:22:09 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:22:09 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 04:22:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:22:19 localhost systemd[1]: tmp-crun.MhzzrT.mount: Deactivated successfully. 
Oct 5 04:22:19 localhost podman[78853]: 2025-10-05 08:22:19.91348222 +0000 UTC m=+0.082362559 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:22:20 localhost podman[78853]: 2025-10-05 08:22:20.119011908 +0000 UTC m=+0.287892237 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step1, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team) Oct 5 04:22:20 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:22:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:22:30 localhost podman[78882]: 2025-10-05 08:22:30.914045469 +0000 UTC m=+0.083667364 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:22:30 localhost systemd[1]: tmp-crun.paz9Go.mount: Deactivated successfully. Oct 5 04:22:30 localhost podman[78883]: 2025-10-05 08:22:30.966242187 +0000 UTC m=+0.132172423 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git) Oct 5 04:22:30 localhost podman[78882]: 2025-10-05 08:22:30.994101804 +0000 UTC m=+0.163723699 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:22:31 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:22:31 localhost podman[78883]: 2025-10-05 08:22:31.007073872 +0000 UTC m=+0.173004148 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Oct 5 04:22:31 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:22:31 localhost podman[78884]: 2025-10-05 08:22:31.073524563 +0000 UTC m=+0.236162971 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Oct 5 04:22:31 localhost podman[78884]: 2025-10-05 08:22:31.127171371 +0000 UTC m=+0.289809759 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 04:22:31 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:22:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:22:32 localhost podman[78953]: 2025-10-05 08:22:32.911979587 +0000 UTC m=+0.080879900 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_migration_target, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:22:33 localhost podman[78953]: 2025-10-05 08:22:33.290096082 +0000 UTC m=+0.458996324 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git) Oct 5 04:22:33 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:22:35 localhost podman[78976]: 2025-10-05 08:22:35.903900448 +0000 UTC m=+0.076229214 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, release=1, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true) Oct 5 04:22:35 localhost podman[78977]: 2025-10-05 08:22:35.924820968 +0000 UTC m=+0.088348128 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:22:35 localhost podman[78977]: 2025-10-05 08:22:35.95099416 +0000 UTC m=+0.114521350 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, tcib_managed=true, version=17.1.9) Oct 5 04:22:35 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:22:36 localhost podman[78978]: 2025-10-05 08:22:36.014592674 +0000 UTC m=+0.178126905 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:22:36 localhost podman[78976]: 2025-10-05 08:22:36.03683144 +0000 UTC m=+0.209160216 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO 
Team, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:22:36 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:22:36 localhost podman[78978]: 2025-10-05 08:22:36.053174348 +0000 UTC m=+0.216708539 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack 
Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:22:36 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:22:36 localhost podman[78984]: 2025-10-05 08:22:36.13793016 +0000 UTC m=+0.294513285 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, tcib_managed=true, container_name=collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:22:36 localhost podman[78984]: 2025-10-05 08:22:36.176202836 +0000 UTC m=+0.332785941 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, maintainer=OpenStack TripleO Team, release=2, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:22:36 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:22:36 localhost systemd[1]: tmp-crun.GFNLFc.mount: Deactivated successfully. Oct 5 04:22:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:22:40 localhost podman[79151]: 2025-10-05 08:22:40.911547254 +0000 UTC m=+0.080956450 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5) Oct 5 04:22:40 localhost podman[79151]: 2025-10-05 08:22:40.945106663 +0000 UTC m=+0.114515859 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, architecture=x86_64, build-date=2025-07-21T14:48:37, tcib_managed=true, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute) Oct 5 04:22:40 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:22:48 localhost systemd[1]: libpod-464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9.scope: Deactivated successfully. Oct 5 04:22:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:22:48 localhost recover_tripleo_nova_virtqemud[79184]: 63458 Oct 5 04:22:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:22:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:22:48 localhost podman[79177]: 2025-10-05 08:22:48.22944342 +0000 UTC m=+0.057095472 container died 464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, batch=17.1_20250721.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, container_name=nova_wait_for_compute_service, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12) Oct 5 04:22:48 localhost systemd[1]: tmp-crun.PXTxTz.mount: Deactivated successfully. Oct 5 04:22:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9-userdata-shm.mount: Deactivated successfully. Oct 5 04:22:48 localhost systemd[1]: var-lib-containers-storage-overlay-79319d12525dee5bc50b02f4506bc7bc6e833cf5798b23ca8359393e14a5b8e7-merged.mount: Deactivated successfully. 
Oct 5 04:22:48 localhost podman[79177]: 2025-10-05 08:22:48.269884633 +0000 UTC m=+0.097536645 container cleanup 464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., 
tcib_managed=true, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 04:22:48 localhost systemd[1]: libpod-conmon-464377703bfa82952c45f1b5a3d6894272f107cb00e14fef0c087d0d7d4812a9.scope: Deactivated successfully. Oct 5 04:22:48 localhost python3[77275]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=5d5b173631792e25c080b07e9b3e041b --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Oct 5 04:22:48 localhost python3[79233]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:49 localhost python3[79249]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 04:22:49 localhost python3[79310]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1759652569.133025-119696-195085542562455/source dest=/etc/systemd/system/tripleo_nova_compute.service 
mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:50 localhost python3[79326]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 04:22:50 localhost systemd[1]: Reloading. Oct 5 04:22:50 localhost systemd-rc-local-generator[79348]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:22:50 localhost systemd-sysv-generator[79352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:22:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:22:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:22:50 localhost podman[79363]: 2025-10-05 08:22:50.592053093 +0000 UTC m=+0.089412938 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:07:59, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:22:50 localhost podman[79363]: 2025-10-05 08:22:50.804324912 +0000 UTC m=+0.301684757 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9) Oct 5 04:22:50 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:22:51 localhost python3[79408]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 04:22:51 localhost systemd[1]: Reloading. Oct 5 04:22:51 localhost systemd-sysv-generator[79441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:22:51 localhost systemd-rc-local-generator[79435]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:22:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:22:51 localhost systemd[1]: Starting nova_compute container... 
Oct 5 04:22:51 localhost tripleo-start-podman-container[79448]: Creating additional drop-in dependency for "nova_compute" (700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef) Oct 5 04:22:51 localhost systemd[1]: Reloading. Oct 5 04:22:51 localhost systemd-sysv-generator[79508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 04:22:51 localhost systemd-rc-local-generator[79503]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 04:22:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 04:22:52 localhost systemd[1]: Started nova_compute container. Oct 5 04:22:52 localhost systemd[1]: Starting dnf makecache... Oct 5 04:22:52 localhost dnf[79516]: Updating Subscription Management repositories. Oct 5 04:22:52 localhost python3[79546]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:53 localhost dnf[79516]: Metadata cache refreshed recently. Oct 5 04:22:54 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Oct 5 04:22:54 localhost systemd[1]: Finished dnf makecache. Oct 5 04:22:54 localhost systemd[1]: dnf-makecache.service: Consumed 1.830s CPU time. 
Oct 5 04:22:54 localhost python3[79667]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005471152 step=5 update_config_hash_only=False Oct 5 04:22:54 localhost python3[79684]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 04:22:55 localhost python3[79700]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Oct 5 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:23:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:23:01 localhost systemd[1]: tmp-crun.Z2mMlI.mount: Deactivated successfully. 
Oct 5 04:23:01 localhost podman[79701]: 2025-10-05 08:23:01.984253887 +0000 UTC m=+0.148132621 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, build-date=2025-07-21T14:45:33, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:23:01 localhost podman[79702]: 2025-10-05 08:23:01.938865501 +0000 UTC m=+0.103653179 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1) Oct 5 04:23:02 localhost podman[79702]: 2025-10-05 08:23:02.023214202 +0000 UTC m=+0.188001870 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, version=17.1.9, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:23:02 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:23:02 localhost podman[79701]: 2025-10-05 08:23:02.036282201 +0000 UTC m=+0.200160995 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, 
release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute) Oct 5 04:23:02 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:23:02 localhost podman[79703]: 2025-10-05 08:23:02.125965425 +0000 UTC m=+0.288320319 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 04:23:02 localhost podman[79703]: 2025-10-05 08:23:02.180104317 +0000 UTC m=+0.342459201 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:23:02 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:23:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:23:03 localhost podman[79835]: 2025-10-05 08:23:03.918287313 +0000 UTC m=+0.084328932 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute) Oct 5 04:23:04 localhost podman[79835]: 2025-10-05 08:23:04.291873426 +0000 UTC m=+0.457915055 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1, container_name=nova_migration_target, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:23:04 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:23:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:23:06 localhost systemd[1]: tmp-crun.NIPzmn.mount: Deactivated successfully. 
Oct 5 04:23:06 localhost podman[79872]: 2025-10-05 08:23:06.936009154 +0000 UTC m=+0.099031895 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1) Oct 5 04:23:06 localhost systemd[1]: tmp-crun.6sePYH.mount: Deactivated 
successfully. Oct 5 04:23:06 localhost podman[79873]: 2025-10-05 08:23:06.983937179 +0000 UTC m=+0.143111117 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team) Oct 5 04:23:06 localhost podman[79873]: 2025-10-05 08:23:06.994953254 +0000 UTC m=+0.154127242 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-iscsid, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:23:07 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:23:07 localhost podman[79872]: 2025-10-05 08:23:07.03581965 +0000 UTC m=+0.198842391 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, 
version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:23:07 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:23:07 localhost podman[79879]: 2025-10-05 08:23:07.039962841 +0000 UTC m=+0.195755587 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03) Oct 5 04:23:07 localhost podman[79871]: 2025-10-05 08:23:07.113459401 +0000 UTC m=+0.279980095 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:23:07 localhost podman[79879]: 2025-10-05 08:23:07.169576165 +0000 UTC m=+0.325368931 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, release=2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-collectd) Oct 5 04:23:07 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:23:07 localhost podman[79871]: 2025-10-05 08:23:07.18131802 +0000 UTC m=+0.347838714 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Oct 5 04:23:07 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:23:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:23:11 localhost podman[79955]: 2025-10-05 08:23:11.916714108 +0000 UTC m=+0.084040744 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, container_name=nova_compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Oct 5 04:23:11 localhost podman[79955]: 2025-10-05 08:23:11.973295214 +0000 UTC m=+0.140621820 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1) Oct 5 04:23:11 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:23:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:23:21 localhost podman[79980]: 2025-10-05 08:23:21.91893115 +0000 UTC m=+0.082069141 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:07:59, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:23:22 localhost podman[79980]: 2025-10-05 08:23:22.125671252 +0000 UTC m=+0.288809223 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:23:22 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:23:25 localhost sshd[80010]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:23:25 localhost systemd-logind[760]: New session 33 of user zuul. Oct 5 04:23:25 localhost systemd[1]: Started Session 33 of User zuul. Oct 5 04:23:26 localhost python3[80119]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:23:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:23:32 localhost systemd[1]: tmp-crun.O4SJns.mount: Deactivated successfully. 
Oct 5 04:23:32 localhost podman[80306]: 2025-10-05 08:23:32.980237706 +0000 UTC m=+0.139488940 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Oct 5 04:23:33 localhost podman[80307]: 2025-10-05 08:23:33.029482906 +0000 UTC m=+0.188320508 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, 
release=1, build-date=2025-07-21T13:07:52, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, container_name=logrotate_crond) Oct 5 04:23:33 localhost podman[80306]: 2025-10-05 08:23:33.035184319 +0000 UTC m=+0.194435553 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, release=1, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:23:33 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:23:33 localhost podman[80308]: 2025-10-05 08:23:32.948518106 +0000 UTC m=+0.105513599 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:23:33 localhost podman[80307]: 2025-10-05 08:23:33.066441577 +0000 UTC m=+0.225279179 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, batch=17.1_20250721.1, release=1, distribution-scope=public) Oct 5 04:23:33 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:23:33 localhost podman[80308]: 2025-10-05 08:23:33.083220797 +0000 UTC m=+0.240216370 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, release=1, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:23:33 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:23:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:23:34 localhost systemd[1]: tmp-crun.tLusK9.mount: Deactivated successfully. 
Oct 5 04:23:34 localhost podman[80452]: 2025-10-05 08:23:34.792392966 +0000 UTC m=+0.095332516 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, release=1, container_name=nova_migration_target) Oct 5 04:23:34 localhost python3[80451]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Oct 5 04:23:35 localhost podman[80452]: 2025-10-05 08:23:35.151306646 +0000 UTC m=+0.454246166 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:23:35 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:23:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:23:37 localhost podman[80491]: 2025-10-05 08:23:37.918856762 +0000 UTC m=+0.086640093 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:23:37 localhost systemd[1]: tmp-crun.cFhhe8.mount: Deactivated successfully. Oct 5 04:23:37 localhost podman[80492]: 2025-10-05 08:23:37.942693042 +0000 UTC m=+0.104293238 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:23:37 localhost podman[80490]: 2025-10-05 08:23:37.952894844 +0000 UTC m=+0.123780138 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:23:38 localhost podman[80492]: 2025-10-05 08:23:38.004981011 +0000 UTC m=+0.166581247 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, 
com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3) Oct 5 04:23:38 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:23:38 localhost podman[80496]: 2025-10-05 08:23:38.020030444 +0000 UTC m=+0.181357111 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, version=17.1.9, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, architecture=x86_64) Oct 5 04:23:38 localhost podman[80496]: 2025-10-05 08:23:38.031980535 +0000 UTC m=+0.193307222 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, release=2) Oct 5 04:23:38 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:23:38 localhost podman[80491]: 2025-10-05 08:23:38.044557701 +0000 UTC m=+0.212341032 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:23:38 localhost systemd[1]: 
2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:23:38 localhost podman[80490]: 2025-10-05 08:23:38.096059271 +0000 UTC m=+0.266944605 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T16:28:53, version=17.1.9, release=1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:23:38 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:23:39 localhost python3[80671]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Oct 5 04:23:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Oct 5 04:23:39 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Oct 5 04:23:39 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 04:23:39 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 04:23:39 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 04:23:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:23:42 localhost podman[80715]: 2025-10-05 08:23:42.906857823 +0000 UTC m=+0.078969118 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.9, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:23:42 localhost podman[80715]: 2025-10-05 08:23:42.939133137 +0000 UTC m=+0.111244422 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step5, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 
04:23:42 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:23:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:23:52 localhost podman[80742]: 2025-10-05 08:23:52.897549196 +0000 UTC m=+0.065063545 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd) Oct 5 04:23:53 localhost podman[80742]: 2025-10-05 08:23:53.088135424 +0000 UTC m=+0.255649803 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, distribution-scope=public) Oct 5 04:23:53 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:24:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:24:03 localhost podman[80771]: 2025-10-05 08:24:03.920530686 +0000 UTC m=+0.088708499 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:24:03 localhost podman[80773]: 2025-10-05 08:24:03.978012686 +0000 UTC m=+0.140580099 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:24:04 localhost podman[80771]: 2025-10-05 08:24:04.031275864 +0000 UTC m=+0.199453717 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1) Oct 5 04:24:04 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:24:04 localhost podman[80773]: 2025-10-05 08:24:04.060020535 +0000 UTC m=+0.222587988 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:24:04 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:24:04 localhost podman[80772]: 2025-10-05 08:24:04.033465492 +0000 UTC m=+0.198913972 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-cron-container) Oct 5 04:24:04 localhost podman[80772]: 2025-10-05 08:24:04.11733392 +0000 UTC m=+0.282782420 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, container_name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:24:04 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:24:04 localhost systemd[1]: tmp-crun.BYY4ZG.mount: Deactivated successfully. Oct 5 04:24:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:24:05 localhost podman[80921]: 2025-10-05 08:24:05.651709415 +0000 UTC m=+0.085203535 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, distribution-scope=public, release=1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=) Oct 5 04:24:05 localhost podman[80921]: 2025-10-05 08:24:05.985576514 +0000 UTC m=+0.419070624 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container) Oct 5 04:24:05 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:24:08 localhost podman[80944]: 2025-10-05 08:24:08.923535629 +0000 UTC m=+0.087029195 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:24:08 localhost podman[80945]: 2025-10-05 08:24:08.981870442 +0000 UTC m=+0.141308769 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1) Oct 5 04:24:08 localhost podman[80944]: 2025-10-05 08:24:08.99227848 +0000 UTC m=+0.155772016 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:09 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:24:09 localhost podman[80946]: 2025-10-05 08:24:09.032875669 +0000 UTC m=+0.189020478 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc.) Oct 5 04:24:09 localhost podman[80945]: 2025-10-05 08:24:09.039029034 +0000 UTC m=+0.198467291 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T13:28:44, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:24:09 localhost podman[80946]: 2025-10-05 08:24:09.045181119 +0000 UTC m=+0.201325888 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:24:09 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:24:09 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:24:09 localhost podman[80947]: 2025-10-05 08:24:09.082291853 +0000 UTC m=+0.234998629 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, 
config_id=tripleo_step3, vcs-type=git, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:24:09 localhost podman[80947]: 2025-10-05 08:24:09.120366734 +0000 UTC m=+0.273073560 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, config_id=tripleo_step3, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, release=2, io.buildah.version=1.33.12, 
vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, container_name=collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Oct 5 04:24:09 
localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:24:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:24:13 localhost systemd[1]: tmp-crun.EZK8nQ.mount: Deactivated successfully. Oct 5 04:24:13 localhost podman[81026]: 2025-10-05 08:24:13.916231944 +0000 UTC m=+0.085015590 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_id=tripleo_step5) Oct 5 04:24:13 localhost podman[81026]: 2025-10-05 08:24:13.943085684 +0000 UTC m=+0.111869330 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 
17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, release=1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, version=17.1.9) Oct 5 04:24:13 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:24:21 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:24:21 localhost recover_tripleo_nova_virtqemud[81053]: 63458 Oct 5 04:24:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:24:21 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:24:23 localhost systemd[1]: tmp-crun.658y6w.mount: Deactivated successfully. 
Oct 5 04:24:23 localhost podman[81054]: 2025-10-05 08:24:23.933640802 +0000 UTC m=+0.100579837 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, release=1, architecture=x86_64, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, batch=17.1_20250721.1, container_name=metrics_qdr, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:24:24 localhost podman[81054]: 2025-10-05 08:24:24.124886737 +0000 UTC m=+0.291825812 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, version=17.1.9, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:24:24 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:24:34 localhost systemd[1]: tmp-crun.7JO2Mv.mount: Deactivated successfully. 
Oct 5 04:24:34 localhost podman[81084]: 2025-10-05 08:24:34.914194555 +0000 UTC m=+0.080929750 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4) Oct 5 04:24:34 localhost podman[81084]: 2025-10-05 08:24:34.949139751 +0000 UTC m=+0.115874956 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat 
OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-cron) Oct 5 04:24:34 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:24:34 localhost podman[81083]: 2025-10-05 08:24:34.971754147 +0000 UTC m=+0.137716761 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-type=git) Oct 5 04:24:34 localhost podman[81085]: 2025-10-05 08:24:34.938053624 +0000 UTC m=+0.098784088 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:24:35 localhost podman[81085]: 2025-10-05 08:24:35.017616706 +0000 UTC m=+0.178347120 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, release=1, version=17.1.9) Oct 5 04:24:35 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:24:35 localhost podman[81083]: 2025-10-05 08:24:35.030247395 +0000 UTC m=+0.196210059 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, release=1, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, version=17.1.9) Oct 5 04:24:35 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:24:35 localhost systemd[1]: tmp-crun.ahT74w.mount: Deactivated successfully. Oct 5 04:24:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:24:36 localhost podman[81154]: 2025-10-05 08:24:36.898652142 +0000 UTC m=+0.067136650 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1) Oct 5 04:24:37 localhost podman[81154]: 2025-10-05 08:24:37.28135902 +0000 UTC m=+0.449843588 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1) Oct 5 04:24:37 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:24:38 localhost systemd[1]: session-33.scope: Deactivated successfully. Oct 5 04:24:38 localhost systemd[1]: session-33.scope: Consumed 5.708s CPU time. Oct 5 04:24:38 localhost systemd-logind[760]: Session 33 logged out. Waiting for processes to exit. Oct 5 04:24:38 localhost systemd-logind[760]: Removed session 33. 
Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:24:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:24:39 localhost podman[81202]: 2025-10-05 08:24:39.921670976 +0000 UTC m=+0.079011429 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:24:39 localhost podman[81202]: 2025-10-05 08:24:39.95916116 +0000 UTC m=+0.116501603 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid) Oct 5 04:24:39 localhost systemd[1]: tmp-crun.acreUS.mount: Deactivated successfully. 
Oct 5 04:24:39 localhost podman[81201]: 2025-10-05 08:24:39.983880703 +0000 UTC m=+0.141303468 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, release=1, build-date=2025-07-21T13:28:44, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1) Oct 5 04:24:39 localhost podman[81203]: 2025-10-05 08:24:39.996990375 +0000 UTC 
m=+0.148064330 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:24:40 localhost podman[81203]: 2025-10-05 08:24:40.005630626 +0000 UTC m=+0.156704611 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=2, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:04:03) Oct 5 04:24:40 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:24:40 localhost podman[81201]: 2025-10-05 08:24:40.05538438 +0000 UTC m=+0.212807145 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1, version=17.1.9, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:24:40 localhost systemd[1]: 
6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:24:40 localhost podman[81200]: 2025-10-05 08:24:40.133150134 +0000 UTC m=+0.292813339 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=) Oct 5 04:24:40 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:24:40 localhost podman[81200]: 2025-10-05 08:24:40.204722012 +0000 UTC m=+0.364385227 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12) Oct 5 04:24:40 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:24:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:24:44 localhost podman[81307]: 2025-10-05 08:24:44.907754696 +0000 UTC m=+0.080412067 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:24:44 localhost podman[81307]: 2025-10-05 08:24:44.958482535 +0000 UTC m=+0.131139886 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=) Oct 5 04:24:44 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:24:51 localhost sshd[81333]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:24:51 localhost systemd-logind[760]: New session 34 of user zuul. Oct 5 04:24:51 localhost systemd[1]: Started Session 34 of User zuul. Oct 5 04:24:52 localhost python3[81352]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 04:24:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:24:54 localhost systemd[1]: tmp-crun.swc5Ck.mount: Deactivated successfully. 
Oct 5 04:24:54 localhost podman[81354]: 2025-10-05 08:24:54.915925978 +0000 UTC m=+0.085706669 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 5 04:24:55 localhost podman[81354]: 2025-10-05 08:24:55.125199026 +0000 UTC m=+0.294979737 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:24:55 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:25:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:25:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4363 writes, 20K keys, 4363 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4363 writes, 480 syncs, 9.09 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:25:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:25:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 5237 writes, 23K keys, 5237 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5237 writes, 572 
syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:25:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:25:05 localhost systemd[1]: tmp-crun.rWQOnM.mount: Deactivated successfully. Oct 5 04:25:05 localhost podman[81400]: 2025-10-05 08:25:05.801285288 +0000 UTC m=+0.101266735 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, release=1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, name=rhosp17/openstack-cron) Oct 5 04:25:05 localhost podman[81400]: 2025-10-05 08:25:05.842213555 +0000 UTC m=+0.142195062 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-cron, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52) Oct 5 04:25:05 localhost podman[81401]: 2025-10-05 08:25:05.84165476 +0000 UTC m=+0.138660357 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, version=17.1.9, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:25:05 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:25:05 localhost podman[81399]: 2025-10-05 08:25:05.905136641 +0000 UTC m=+0.206783883 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=) Oct 5 04:25:05 localhost podman[81401]: 2025-10-05 08:25:05.925160718 +0000 UTC m=+0.222166355 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:25:05 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:25:05 localhost podman[81399]: 2025-10-05 08:25:05.966229599 +0000 UTC m=+0.267876851 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:25:05 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:25:06 localhost systemd[1]: tmp-crun.08DZvp.mount: Deactivated successfully. Oct 5 04:25:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:25:07 localhost podman[81516]: 2025-10-05 08:25:07.911549538 +0000 UTC m=+0.082086412 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
container_name=nova_migration_target, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, name=rhosp17/openstack-nova-compute, version=17.1.9) Oct 5 04:25:08 localhost podman[81516]: 2025-10-05 08:25:08.280842995 +0000 UTC m=+0.451379909 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:25:08 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:25:10 localhost podman[81555]: 2025-10-05 08:25:10.917452752 +0000 UTC m=+0.081055573 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, container_name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9) Oct 5 04:25:10 localhost podman[81555]: 2025-10-05 08:25:10.929068353 +0000 UTC m=+0.092671134 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:25:10 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:25:10 localhost podman[81556]: 2025-10-05 08:25:10.974537072 +0000 UTC m=+0.135102292 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, release=2, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vendor=Red Hat, Inc.) 
Oct 5 04:25:11 localhost podman[81553]: 2025-10-05 08:25:11.024280685 +0000 UTC m=+0.193730203 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:25:11 localhost podman[81556]: 2025-10-05 08:25:11.038019244 +0000 UTC m=+0.198584424 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, container_name=collectd, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:25:11 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:25:11 localhost podman[81553]: 2025-10-05 08:25:11.069135788 +0000 UTC m=+0.238585296 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:25:11 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:25:11 localhost podman[81554]: 2025-10-05 08:25:11.126954137 +0000 UTC m=+0.293378434 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:25:11 localhost podman[81554]: 2025-10-05 08:25:11.15318537 +0000 UTC m=+0.319609657 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, release=1, io.buildah.version=1.33.12) Oct 5 04:25:11 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:25:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:25:15 localhost podman[81638]: 2025-10-05 08:25:15.910811855 +0000 UTC m=+0.077120759 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, container_name=nova_compute, io.openshift.expose-services=, architecture=x86_64, version=17.1.9) Oct 5 04:25:15 localhost podman[81638]: 2025-10-05 08:25:15.966198629 +0000 UTC m=+0.132507543 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, version=17.1.9, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:25:15 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:25:20 localhost python3[81679]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Oct 5 04:25:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 04:25:24 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 04:25:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 04:25:24 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 04:25:24 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 04:25:24 localhost systemd[1]: run-r23877fa37f8e4acea32815c7e7048540.service: Deactivated successfully. Oct 5 04:25:24 localhost systemd[1]: run-r2fdde1bcdb904b79acb71b0a181e7a88.service: Deactivated successfully. Oct 5 04:25:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:25:25 localhost systemd[1]: tmp-crun.VMUEIx.mount: Deactivated successfully. 
Oct 5 04:25:25 localhost podman[81831]: 2025-10-05 08:25:25.911814458 +0000 UTC m=+0.084661970 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:25:26 localhost podman[81831]: 2025-10-05 08:25:26.106259621 +0000 UTC m=+0.279107133 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, release=1, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team) Oct 5 04:25:26 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:25:36 localhost podman[81859]: 2025-10-05 08:25:36.921528249 +0000 UTC m=+0.089640616 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 5 04:25:36 localhost podman[81860]: 2025-10-05 08:25:36.961987703 +0000 UTC m=+0.130255644 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.9) Oct 5 04:25:36 localhost podman[81859]: 2025-10-05 08:25:36.978166994 +0000 UTC m=+0.146279321 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, container_name=ceilometer_agent_compute, tcib_managed=true, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:25:36 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:25:36 localhost podman[81860]: 2025-10-05 08:25:36.998316134 +0000 UTC m=+0.166584035 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, release=1) Oct 5 04:25:37 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:25:37 localhost podman[81861]: 2025-10-05 08:25:37.06491368 +0000 UTC m=+0.232859842 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, 
release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Oct 5 04:25:37 localhost podman[81861]: 2025-10-05 08:25:37.095203116 +0000 UTC m=+0.263149248 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:25:37 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:25:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:25:38 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:25:38 localhost recover_tripleo_nova_virtqemud[81931]: 63458 Oct 5 04:25:38 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:25:38 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:25:38 localhost systemd[1]: tmp-crun.3kYcyE.mount: Deactivated successfully. 
Oct 5 04:25:38 localhost podman[81929]: 2025-10-05 08:25:38.93443591 +0000 UTC m=+0.096729430 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, 
name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37) Oct 5 04:25:39 localhost podman[81929]: 2025-10-05 08:25:39.29231319 +0000 UTC m=+0.454606710 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Oct 5 04:25:39 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:25:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:25:41 localhost podman[82001]: 2025-10-05 08:25:41.909473951 +0000 UTC m=+0.070873845 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, tcib_managed=true, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd) Oct 5 04:25:41 localhost podman[82001]: 2025-10-05 08:25:41.921266812 +0000 UTC m=+0.082666706 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, release=2, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:25:41 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:25:41 localhost podman[81999]: 2025-10-05 08:25:41.963549886 +0000 UTC m=+0.128556468 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, release=1, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:25:41 localhost podman[81998]: 2025-10-05 08:25:41.920964484 +0000 UTC 
m=+0.084344371 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, release=1, distribution-scope=public, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=) Oct 5 04:25:41 localhost podman[81999]: 2025-10-05 08:25:41.980282842 +0000 UTC m=+0.145289384 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, container_name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 04:25:41 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:25:42 localhost podman[81998]: 2025-10-05 08:25:42.00001185 +0000 UTC m=+0.163391727 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53) Oct 5 04:25:42 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:25:42 localhost podman[82000]: 2025-10-05 08:25:42.073894125 +0000 UTC m=+0.237316783 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, 
container_name=iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:25:42 localhost podman[82000]: 2025-10-05 08:25:42.085140511 +0000 UTC m=+0.248563159 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid) Oct 5 04:25:42 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:25:42 localhost systemd[1]: tmp-crun.nNCjSf.mount: Deactivated successfully. Oct 5 04:25:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:25:46 localhost systemd[1]: tmp-crun.wZrhFA.mount: Deactivated successfully. 
Oct 5 04:25:46 localhost podman[82085]: 2025-10-05 08:25:46.919622348 +0000 UTC m=+0.086870820 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:25:46 localhost podman[82085]: 2025-10-05 08:25:46.945283878 +0000 UTC m=+0.112532340 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.buildah.version=1.33.12, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, release=1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc.) 
Oct 5 04:25:46 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:25:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:25:56 localhost systemd[1]: tmp-crun.c0Yp3T.mount: Deactivated successfully. Oct 5 04:25:56 localhost podman[82111]: 2025-10-05 08:25:56.908055015 +0000 UTC m=+0.079551662 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1) Oct 5 04:25:57 localhost podman[82111]: 2025-10-05 08:25:57.120348714 +0000 UTC m=+0.291845351 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:25:57 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:26:07 localhost podman[82139]: 2025-10-05 08:26:07.900060382 +0000 UTC m=+0.072539900 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, release=1, config_id=tripleo_step4) Oct 5 04:26:07 localhost podman[82139]: 2025-10-05 08:26:07.924265942 +0000 UTC m=+0.096745460 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9) Oct 5 04:26:07 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:26:07 localhost systemd[1]: tmp-crun.OOjOZP.mount: Deactivated successfully. 
Oct 5 04:26:07 localhost podman[82141]: 2025-10-05 08:26:07.977631688 +0000 UTC m=+0.141462059 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, release=1, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:26:08 localhost podman[82141]: 2025-10-05 08:26:08.029891283 +0000 UTC m=+0.193721674 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:26:08 localhost podman[82140]: 2025-10-05 08:26:08.029214144 +0000 UTC m=+0.195320587 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:52, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:26:08 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:26:08 localhost podman[82140]: 2025-10-05 08:26:08.111390165 +0000 UTC m=+0.277496618 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack 
Platform 17.1 cron, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:26:08 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:26:08 localhost systemd[1]: tmp-crun.QIaunc.mount: Deactivated successfully. Oct 5 04:26:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:26:09 localhost podman[82223]: 2025-10-05 08:26:09.924928958 +0000 UTC m=+0.084764683 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:26:10 localhost podman[82223]: 2025-10-05 08:26:10.315883701 +0000 UTC m=+0.475719416 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_migration_target) Oct 5 04:26:10 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:26:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:26:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:26:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:26:12 localhost podman[82363]: 2025-10-05 08:26:12.157091968 +0000 UTC m=+0.113711893 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=2, vendor=Red Hat, Inc., 
distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-collectd, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20250721.1) Oct 5 04:26:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:26:12 localhost podman[82363]: 2025-10-05 08:26:12.204973794 +0000 UTC m=+0.161593709 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3) Oct 5 04:26:12 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:26:12 localhost systemd[1]: tmp-crun.ZITuBk.mount: Deactivated successfully. 
Oct 5 04:26:12 localhost podman[82362]: 2025-10-05 08:26:12.229475963 +0000 UTC m=+0.187791483 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:26:12 localhost podman[82362]: 2025-10-05 08:26:12.248333527 +0000 UTC 
m=+0.206649037 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.9, container_name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 04:26:12 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:26:12 localhost podman[82401]: 2025-10-05 08:26:12.314911112 +0000 UTC m=+0.126464930 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9) Oct 5 04:26:12 localhost podman[82401]: 2025-10-05 08:26:12.321800731 +0000 UTC m=+0.133354519 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:26:12 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:26:12 localhost podman[82364]: 2025-10-05 08:26:12.36356655 +0000 UTC m=+0.312997938 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc.) 
Oct 5 04:26:12 localhost podman[82364]: 2025-10-05 08:26:12.40499685 +0000 UTC m=+0.354428298 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true) Oct 5 04:26:12 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:26:13 localhost systemd[1]: tmp-crun.Tv9N4v.mount: Deactivated successfully. Oct 5 04:26:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:26:17 localhost podman[82444]: 2025-10-05 08:26:17.910498797 +0000 UTC m=+0.074212616 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_compute, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:26:17 localhost podman[82444]: 2025-10-05 08:26:17.938655235 +0000 UTC m=+0.102369104 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, tcib_managed=true) Oct 5 04:26:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:26:25 localhost systemd[1]: session-34.scope: Deactivated successfully. Oct 5 04:26:25 localhost systemd[1]: session-34.scope: Consumed 7.135s CPU time. Oct 5 04:26:25 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit. Oct 5 04:26:25 localhost systemd-logind[760]: Removed session 34. Oct 5 04:26:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:26:27 localhost podman[82470]: 2025-10-05 08:26:27.915858824 +0000 UTC m=+0.084303871 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59) Oct 5 04:26:28 localhost podman[82470]: 2025-10-05 08:26:28.109464115 +0000 UTC m=+0.277909222 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
architecture=x86_64, release=1, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:26:28 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:26:38 localhost podman[82500]: 2025-10-05 08:26:38.923336715 +0000 UTC m=+0.088636499 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, container_name=logrotate_crond, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:26:38 localhost podman[82500]: 2025-10-05 08:26:38.964959751 +0000 UTC m=+0.130259555 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1) Oct 5 04:26:38 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:26:39 localhost podman[82501]: 2025-10-05 08:26:38.968290881 +0000 UTC m=+0.131075416 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:26:39 localhost podman[82499]: 2025-10-05 08:26:39.028928144 +0000 UTC m=+0.195244485 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_compute, release=1, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:26:39 localhost podman[82501]: 2025-10-05 08:26:39.053253868 +0000 UTC m=+0.216038373 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, release=1) Oct 5 04:26:39 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:26:39 localhost podman[82499]: 2025-10-05 08:26:39.081739245 +0000 UTC m=+0.248055636 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, vcs-type=git) Oct 5 04:26:39 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:26:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:26:40 localhost systemd[1]: tmp-crun.dXmG5Y.mount: Deactivated successfully. Oct 5 04:26:40 localhost podman[82586]: 2025-10-05 08:26:40.911984654 +0000 UTC m=+0.078905884 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 5 04:26:41 localhost podman[82586]: 2025-10-05 08:26:41.280046782 +0000 UTC m=+0.446968012 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, container_name=nova_migration_target, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-07-21T14:48:37) Oct 5 04:26:41 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:26:42 localhost sshd[82631]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:26:42 localhost systemd-logind[760]: New session 35 of user zuul. Oct 5 04:26:42 localhost systemd[1]: Started Session 35 of User zuul. Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:26:42 localhost podman[82635]: 2025-10-05 08:26:42.399952347 +0000 UTC m=+0.121522256 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, io.openshift.expose-services=) Oct 5 04:26:42 localhost podman[82635]: 2025-10-05 08:26:42.419410727 +0000 UTC m=+0.140980646 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, container_name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12) Oct 5 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:26:42 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:26:42 localhost podman[82634]: 2025-10-05 08:26:42.372352854 +0000 UTC m=+0.096704559 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, release=2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, version=17.1.9, container_name=collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Oct 5 04:26:42 localhost podman[82679]: 2025-10-05 08:26:42.476177395 +0000 UTC m=+0.082534842 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, build-date=2025-07-21T13:27:15, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 5 04:26:42 localhost podman[82702]: 2025-10-05 08:26:42.533504919 +0000 UTC m=+0.086652765 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:26:42 localhost podman[82634]: 2025-10-05 08:26:42.554630215 +0000 UTC m=+0.278981930 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, release=2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Oct 5 04:26:42 localhost podman[82679]: 2025-10-05 08:26:42.56107045 +0000 UTC m=+0.167427867 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9) Oct 5 04:26:42 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:26:42 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:26:42 localhost podman[82702]: 2025-10-05 08:26:42.579261007 +0000 UTC m=+0.132408923 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, release=1, tcib_managed=true, vcs-type=git, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible) Oct 5 04:26:42 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:26:42 localhost python3[82701]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 04:26:45 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:26:45 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:26:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:26:48 localhost systemd[1]: tmp-crun.OIsCP5.mount: Deactivated successfully. Oct 5 04:26:48 localhost podman[82864]: 2025-10-05 08:26:48.927405127 +0000 UTC m=+0.093482401 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:26:48 localhost podman[82864]: 2025-10-05 08:26:48.95466095 +0000 UTC m=+0.120738214 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step5, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:26:48 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:26:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:26:58 localhost podman[82950]: 2025-10-05 08:26:58.920298043 +0000 UTC m=+0.088610748 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:26:59 localhost podman[82950]: 2025-10-05 08:26:59.109631106 +0000 UTC m=+0.277943821 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:26:59 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:27:01 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:27:01 localhost recover_tripleo_nova_virtqemud[82979]: 63458 Oct 5 04:27:01 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:27:01 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 04:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:27:09 localhost podman[82982]: 2025-10-05 08:27:09.927213117 +0000 UTC m=+0.087131627 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1) Oct 5 04:27:09 localhost podman[82981]: 2025-10-05 08:27:09.981374134 +0000 UTC m=+0.141870670 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, version=17.1.9, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO 
Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, release=1) Oct 5 04:27:10 localhost podman[82982]: 2025-10-05 08:27:10.057014306 +0000 UTC m=+0.216932806 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T15:29:47, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:27:10 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:27:10 localhost podman[82980]: 2025-10-05 08:27:10.079197752 +0000 UTC m=+0.242109914 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, 
build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64) Oct 5 04:27:10 localhost podman[82981]: 2025-10-05 08:27:10.102714004 +0000 UTC m=+0.263210590 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.component=openstack-cron-container, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:10 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:27:10 localhost podman[82980]: 2025-10-05 08:27:10.140373611 +0000 UTC m=+0.303285733 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Oct 5 04:27:10 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:27:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:27:11 localhost podman[83052]: 2025-10-05 08:27:11.917869871 +0000 UTC m=+0.083204270 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=nova_migration_target, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:27:12 localhost podman[83052]: 2025-10-05 08:27:12.352154096 +0000 UTC m=+0.517488455 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=nova_migration_target, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public) Oct 5 04:27:12 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:27:12 localhost systemd[1]: tmp-crun.eBgAKx.mount: Deactivated successfully. 
Oct 5 04:27:12 localhost podman[83124]: 2025-10-05 08:27:12.935401413 +0000 UTC m=+0.098717473 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, release=1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:27:12 localhost podman[83126]: 2025-10-05 08:27:12.948699306 +0000 UTC m=+0.098886868 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:27:12 localhost podman[83126]: 2025-10-05 08:27:12.987130194 +0000 UTC m=+0.137317786 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, version=17.1.9, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3) Oct 5 04:27:12 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:27:13 localhost podman[83124]: 2025-10-05 08:27:13.00529562 +0000 UTC m=+0.168611690 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53) Oct 5 04:27:13 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:27:13 localhost podman[83125]: 2025-10-05 08:27:12.993590671 +0000 UTC m=+0.156888760 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:13 localhost podman[83125]: 2025-10-05 08:27:13.076572213 +0000 UTC m=+0.239870272 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9) Oct 5 04:27:13 localhost podman[83127]: 2025-10-05 08:27:13.088315913 +0000 UTC m=+0.244683284 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:27:13 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:27:13 localhost podman[83127]: 2025-10-05 08:27:13.097969567 +0000 UTC m=+0.254336918 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, 
build-date=2025-07-21T13:04:03, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, release=2, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:27:13 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:27:19 localhost podman[83234]: 2025-10-05 08:27:19.920198108 +0000 UTC m=+0.084942888 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step5, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:27:19 localhost podman[83234]: 2025-10-05 08:27:19.955214463 +0000 UTC m=+0.119959253 container exec_died 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:27:19 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:27:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:27:29 localhost podman[83260]: 2025-10-05 08:27:29.884220288 +0000 UTC m=+0.060526753 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Oct 5 04:27:30 localhost podman[83260]: 2025-10-05 08:27:30.079171925 +0000 UTC m=+0.255478450 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git) Oct 5 04:27:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:27:39 localhost python3[83305]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:27:40 localhost podman[83310]: 2025-10-05 08:27:40.930209078 +0000 UTC m=+0.089638426 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, 
Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:27:40 localhost podman[83310]: 2025-10-05 08:27:40.967122205 +0000 UTC m=+0.126551633 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 5 04:27:40 localhost podman[83311]: 2025-10-05 08:27:40.974198618 +0000 UTC m=+0.129784891 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public) Oct 5 04:27:40 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:27:40 localhost podman[83311]: 2025-10-05 08:27:40.998077199 +0000 UTC m=+0.153663492 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1) Oct 5 04:27:41 localhost podman[83309]: 2025-10-05 08:27:41.036860147 +0000 UTC m=+0.197117507 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:41 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:27:41 localhost podman[83309]: 2025-10-05 08:27:41.094250353 +0000 UTC m=+0.254507683 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:27:41 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:27:41 localhost systemd[1]: tmp-crun.7kicxY.mount: Deactivated successfully. Oct 5 04:27:42 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:27:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:27:42 localhost rhsm-service[6474]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Oct 5 04:27:42 localhost systemd[1]: tmp-crun.ABgFrv.mount: Deactivated successfully. 
Oct 5 04:27:42 localhost podman[83520]: 2025-10-05 08:27:42.955930178 +0000 UTC m=+0.095304421 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:27:43 localhost podman[83520]: 2025-10-05 08:27:43.330088843 +0000 UTC m=+0.469463156 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:27:43 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:27:43 localhost systemd[1]: tmp-crun.yN1yAN.mount: Deactivated successfully. Oct 5 04:27:43 localhost podman[83567]: 2025-10-05 08:27:43.426375429 +0000 UTC m=+0.094575201 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64) Oct 5 04:27:43 localhost podman[83568]: 2025-10-05 08:27:43.472944059 +0000 UTC m=+0.139135976 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:27:43 localhost podman[83566]: 2025-10-05 08:27:43.435365794 +0000 UTC m=+0.104376818 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.9, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:27:43 localhost podman[83567]: 2025-10-05 08:27:43.490808536 +0000 UTC m=+0.159008278 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, 
summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:27:43 localhost podman[83568]: 2025-10-05 08:27:43.503364909 +0000 UTC m=+0.169556836 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack 
Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, tcib_managed=true, container_name=collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, 
config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-type=git) Oct 5 04:27:43 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:27:43 localhost podman[83566]: 2025-10-05 08:27:43.518159122 +0000 UTC m=+0.187170126 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, release=1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Oct 5 04:27:43 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:27:43 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:27:43 localhost podman[83565]: 2025-10-05 08:27:43.586055304 +0000 UTC m=+0.257607467 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible) Oct 5 04:27:43 localhost podman[83565]: 2025-10-05 08:27:43.628162972 +0000 UTC m=+0.299715175 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9) Oct 5 04:27:43 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:27:50 localhost podman[83713]: 2025-10-05 08:27:50.912201748 +0000 UTC m=+0.079687324 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, io.openshift.expose-services=) Oct 5 04:27:50 localhost podman[83713]: 2025-10-05 08:27:50.938322601 +0000 UTC m=+0.105808157 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_id=tripleo_step5, container_name=nova_compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:27:50 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:28:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:28:00 localhost podman[83738]: 2025-10-05 08:28:00.91179848 +0000 UTC m=+0.083076537 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 5 04:28:01 localhost podman[83738]: 2025-10-05 08:28:01.11930354 +0000 UTC m=+0.290581577 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 5 04:28:01 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:28:01 localhost python3[83780]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Oct 5 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 04:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:28:11 localhost systemd[1]: tmp-crun.nra0ie.mount: Deactivated successfully. Oct 5 04:28:11 localhost podman[83782]: 2025-10-05 08:28:11.947012626 +0000 UTC m=+0.100444091 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, release=1) Oct 5 04:28:11 localhost podman[83782]: 2025-10-05 08:28:11.958098278 +0000 UTC m=+0.111529773 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:07:52) Oct 5 04:28:11 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:28:11 localhost podman[83781]: 2025-10-05 08:28:11.919707171 +0000 UTC m=+0.082761468 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.9, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc.) 
Oct 5 04:28:12 localhost podman[83781]: 2025-10-05 08:28:12.004191085 +0000 UTC m=+0.167245332 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, release=1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:12 localhost podman[83783]: 2025-10-05 08:28:11.959062764 +0000 UTC m=+0.109907538 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:12 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:28:12 localhost podman[83783]: 2025-10-05 08:28:12.043455656 +0000 UTC m=+0.194300440 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:28:12 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:28:12 localhost systemd[1]: tmp-crun.td3llk.mount: Deactivated successfully. Oct 5 04:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:28:13 localhost podman[83866]: 2025-10-05 08:28:13.87426587 +0000 UTC m=+0.092058802 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:28:13 localhost podman[83868]: 2025-10-05 08:28:13.892162617 +0000 UTC m=+0.101051667 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:28:13 localhost systemd[1]: tmp-crun.6JgDmm.mount: Deactivated successfully. 
Oct 5 04:28:13 localhost podman[83868]: 2025-10-05 08:28:13.902176931 +0000 UTC m=+0.111065931 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, distribution-scope=public, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 04:28:13 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:28:13 localhost podman[83866]: 2025-10-05 08:28:13.949172353 +0000 UTC m=+0.166965385 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:28:13 localhost systemd[1]: tmp-crun.sqVKzy.mount: Deactivated successfully. Oct 5 04:28:13 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:28:13 localhost podman[83869]: 2025-10-05 08:28:13.985077521 +0000 UTC m=+0.193893739 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T14:48:37, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Oct 5 04:28:14 localhost podman[83867]: 2025-10-05 08:28:14.033882663 +0000 UTC m=+0.248240412 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:28:14 localhost podman[83867]: 2025-10-05 08:28:14.05905989 +0000 UTC m=+0.273417679 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, release=1, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 
ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:14 localhost podman[83875]: 2025-10-05 08:28:13.957727116 +0000 UTC m=+0.162496463 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., release=2, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12) Oct 5 04:28:14 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:28:14 localhost podman[83875]: 2025-10-05 08:28:14.141016675 +0000 UTC m=+0.345785982 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, io.openshift.expose-services=, version=17.1.9, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:28:14 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:28:14 localhost podman[83869]: 2025-10-05 08:28:14.389522263 +0000 UTC m=+0.598338541 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4) Oct 5 04:28:14 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 04:28:14 localhost podman[84051]: 2025-10-05 08:28:14.694387588 +0000 UTC m=+0.079790028 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vendor=Red Hat, Inc., release=553, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, ceph=True, io.openshift.expose-services=) Oct 5 04:28:14 localhost podman[84051]: 2025-10-05 08:28:14.820392855 +0000 UTC m=+0.205795245 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, distribution-scope=public, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_BRANCH=main) Oct 5 04:28:14 localhost systemd[1]: tmp-crun.GL51vY.mount: Deactivated successfully. Oct 5 04:28:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:28:21 localhost systemd[1]: tmp-crun.gARhI9.mount: Deactivated successfully. Oct 5 04:28:21 localhost podman[84193]: 2025-10-05 08:28:21.905101304 +0000 UTC m=+0.077563498 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Oct 5 04:28:21 localhost 
podman[84193]: 2025-10-05 08:28:21.926074255 +0000 UTC m=+0.098536449 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., container_name=nova_compute) Oct 5 04:28:21 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:28:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:28:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:28:31 localhost recover_tripleo_nova_virtqemud[84221]: 63458 Oct 5 04:28:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:28:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:28:31 localhost podman[84219]: 2025-10-05 08:28:31.912995301 +0000 UTC m=+0.082237323 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:28:32 localhost podman[84219]: 2025-10-05 08:28:32.105168803 +0000 UTC m=+0.274410825 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:32 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:28:42 localhost systemd[1]: tmp-crun.2A4GgD.mount: Deactivated successfully. 
Oct 5 04:28:42 localhost podman[84252]: 2025-10-05 08:28:42.940856326 +0000 UTC m=+0.101776258 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible) Oct 5 04:28:42 localhost podman[84251]: 2025-10-05 08:28:42.918003523 +0000 UTC m=+0.084558428 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4) Oct 5 04:28:42 localhost podman[84252]: 2025-10-05 08:28:42.988077214 +0000 UTC m=+0.148997186 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:28:42 localhost podman[84251]: 2025-10-05 08:28:42.99670525 +0000 UTC m=+0.163260135 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, maintainer=OpenStack 
TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1, distribution-scope=public, version=17.1.9, container_name=logrotate_crond) Oct 5 04:28:42 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:28:43 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:28:43 localhost podman[84250]: 2025-10-05 08:28:43.070540483 +0000 UTC m=+0.238122856 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T14:45:33, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, 
tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:28:43 localhost podman[84250]: 2025-10-05 08:28:43.101197139 +0000 UTC m=+0.268779542 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:28:43 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:28:43 localhost systemd[1]: tmp-crun.Sh7B7u.mount: Deactivated successfully. Oct 5 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:28:44 localhost podman[84375]: 2025-10-05 08:28:44.927826969 +0000 UTC m=+0.084010163 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:28:44 localhost podman[84368]: 2025-10-05 08:28:44.975807797 +0000 UTC m=+0.140090392 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc.) Oct 5 04:28:44 localhost podman[84369]: 2025-10-05 08:28:44.982449598 +0000 UTC m=+0.141292064 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15) Oct 5 04:28:45 localhost podman[84367]: 2025-10-05 08:28:44.907641308 +0000 UTC m=+0.076452975 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp 
osp openstack osp-17.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53) Oct 5 04:28:45 localhost podman[84369]: 2025-10-05 08:28:45.016069715 +0000 UTC m=+0.174912191 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:28:45 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:28:45 localhost podman[84367]: 2025-10-05 08:28:45.039716721 +0000 UTC m=+0.208528368 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-type=git) Oct 5 04:28:45 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:28:45 localhost podman[84381]: 2025-10-05 08:28:45.084996695 +0000 UTC m=+0.237674803 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Oct 5 04:28:45 localhost podman[84381]: 2025-10-05 08:28:45.098986197 +0000 UTC m=+0.251664325 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, tcib_managed=true, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:28:45 localhost systemd[1]: 
9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:28:45 localhost podman[84368]: 2025-10-05 08:28:45.121332126 +0000 UTC m=+0.285614691 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Oct 5 
04:28:45 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:28:45 localhost podman[84375]: 2025-10-05 08:28:45.273668031 +0000 UTC m=+0.429851185 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:28:45 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:28:45 localhost systemd[1]: tmp-crun.jSYwDl.mount: Deactivated successfully. Oct 5 04:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:28:52 localhost podman[84472]: 2025-10-05 08:28:52.914957971 +0000 UTC m=+0.082953223 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:28:52 localhost podman[84472]: 2025-10-05 08:28:52.945104673 +0000 UTC m=+0.113099885 container exec_died 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, container_name=nova_compute, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:28:52 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:29:01 localhost systemd[1]: session-35.scope: Deactivated successfully. Oct 5 04:29:01 localhost systemd[1]: session-35.scope: Consumed 11.144s CPU time. Oct 5 04:29:01 localhost systemd-logind[760]: Session 35 logged out. Waiting for processes to exit. Oct 5 04:29:01 localhost systemd-logind[760]: Removed session 35. Oct 5 04:29:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:29:02 localhost systemd[1]: tmp-crun.WSuASJ.mount: Deactivated successfully. 
Oct 5 04:29:02 localhost podman[84498]: 2025-10-05 08:29:02.924374389 +0000 UTC m=+0.090850919 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, release=1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible) Oct 5 04:29:03 localhost podman[84498]: 2025-10-05 08:29:03.146215909 +0000 UTC m=+0.312692429 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd, version=17.1.9, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:29:03 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:29:13 localhost systemd[1]: tmp-crun.mnsrNA.mount: Deactivated successfully. 
Oct 5 04:29:13 localhost podman[84528]: 2025-10-05 08:29:13.914454174 +0000 UTC m=+0.086967383 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:29:13 localhost systemd[1]: tmp-crun.LrQbcU.mount: Deactivated successfully. Oct 5 04:29:13 localhost podman[84530]: 2025-10-05 08:29:13.976383503 +0000 UTC m=+0.140329558 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Oct 5 04:29:14 localhost podman[84529]: 2025-10-05 08:29:14.019527921 +0000 UTC m=+0.186415256 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-cron, version=17.1.9) Oct 5 04:29:14 localhost podman[84530]: 2025-10-05 08:29:14.029351498 +0000 UTC m=+0.193297583 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 04:29:14 localhost podman[84528]: 2025-10-05 08:29:14.040087561 +0000 UTC m=+0.212600780 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp 
osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1) Oct 5 04:29:14 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:29:14 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:29:14 localhost podman[84529]: 2025-10-05 08:29:14.057101845 +0000 UTC m=+0.223989120 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, container_name=logrotate_crond, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:29:14 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:29:15 localhost systemd[1]: tmp-crun.UOj4GB.mount: Deactivated successfully. 
Oct 5 04:29:15 localhost podman[84595]: 2025-10-05 08:29:15.926419829 +0000 UTC m=+0.094447686 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:29:15 localhost podman[84599]: 2025-10-05 08:29:15.978025217 +0000 UTC m=+0.141755408 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, name=rhosp17/openstack-collectd) Oct 5 04:29:15 localhost podman[84599]: 2025-10-05 08:29:15.987019032 +0000 UTC m=+0.150749253 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd) Oct 5 04:29:16 localhost systemd[1]: 
9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:29:16 localhost podman[84598]: 2025-10-05 08:29:16.071812474 +0000 UTC m=+0.236467490 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 5 04:29:16 localhost podman[84597]: 2025-10-05 08:29:16.127681748 +0000 UTC m=+0.293869976 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:29:16 localhost podman[84597]: 2025-10-05 08:29:16.135756649 +0000 UTC m=+0.301944897 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:29:16 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:29:16 localhost podman[84595]: 2025-10-05 08:29:16.146623975 +0000 UTC m=+0.314651782 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:29:16 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:29:16 localhost podman[84596]: 2025-10-05 08:29:16.222265068 +0000 UTC m=+0.390550952 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4) Oct 5 04:29:16 localhost podman[84596]: 2025-10-05 08:29:16.240683351 +0000 UTC m=+0.408969265 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-07-21T13:28:44, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:29:16 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:29:16 localhost podman[84598]: 2025-10-05 08:29:16.455087848 +0000 UTC m=+0.619742894 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1, batch=17.1_20250721.1) Oct 5 04:29:16 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:29:16 localhost systemd[1]: tmp-crun.6FYCq9.mount: Deactivated successfully. 
Oct 5 04:29:19 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.60 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=2229 DF PROTO=TCP SPT=60822 DPT=19885 SEQ=1320638370 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A79C14B92000000000103030A) Oct 5 04:29:20 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.60 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=2230 DF PROTO=TCP SPT=60822 DPT=19885 SEQ=1320638370 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A79C14FAD000000000103030A) Oct 5 04:29:21 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.60 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=45460 DF PROTO=TCP SPT=60832 DPT=19885 SEQ=2210173908 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A79C150AE000000000103030A) Oct 5 04:29:22 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.60 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=45461 DF PROTO=TCP SPT=60832 DPT=19885 SEQ=2210173908 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A79C154AD000000000103030A) Oct 5 04:29:22 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.60 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=28223 DF PROTO=TCP SPT=60842 DPT=19885 SEQ=1945644743 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A79C156A4000000000103030A) Oct 5 04:29:23 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.60 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=28224 DF PROTO=TCP SPT=60842 DPT=19885 SEQ=1945644743 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A79C15AAD000000000103030A) Oct 5 04:29:23 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:29:23 localhost podman[84780]: 2025-10-05 08:29:23.912543835 +0000 UTC m=+0.079461568 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, container_name=nova_compute, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true) Oct 5 04:29:23 localhost podman[84780]: 2025-10-05 08:29:23.969349135 +0000 UTC m=+0.136266818 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:29:23 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:29:33 localhost podman[84806]: 2025-10-05 08:29:33.911236871 +0000 UTC m=+0.084111285 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:29:34 localhost podman[84806]: 2025-10-05 08:29:34.131247762 +0000 UTC m=+0.304122136 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, 
release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:29:34 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:29:44 localhost systemd[1]: tmp-crun.T2HJNM.mount: Deactivated successfully. 
Oct 5 04:29:44 localhost podman[84858]: 2025-10-05 08:29:44.937432872 +0000 UTC m=+0.103116403 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc.) Oct 5 04:29:44 localhost podman[84860]: 2025-10-05 08:29:44.979818559 +0000 UTC m=+0.140906035 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., release=1) Oct 5 04:29:44 localhost podman[84858]: 2025-10-05 08:29:44.995322611 +0000 UTC m=+0.161006152 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:29:45 localhost podman[84859]: 2025-10-05 08:29:44.90215039 +0000 UTC m=+0.069455755 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:29:45 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:29:45 localhost podman[84860]: 2025-10-05 08:29:45.010082874 +0000 UTC m=+0.171170360 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 04:29:45 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:29:45 localhost podman[84859]: 2025-10-05 08:29:45.03634336 +0000 UTC m=+0.203648795 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, container_name=logrotate_crond, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:29:45 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:29:46 localhost podman[84954]: 2025-10-05 08:29:46.926190614 +0000 UTC m=+0.086897511 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible) Oct 5 04:29:46 localhost podman[84951]: 2025-10-05 08:29:46.971068798 +0000 UTC m=+0.139612938 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, architecture=x86_64, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, release=1, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git) Oct 5 04:29:47 localhost podman[84952]: 2025-10-05 08:29:47.028278829 +0000 UTC m=+0.192783119 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc.) 
Oct 5 04:29:47 localhost podman[84953]: 2025-10-05 08:29:47.069088191 +0000 UTC m=+0.232407789 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, container_name=iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, architecture=x86_64) Oct 5 04:29:47 localhost podman[84952]: 2025-10-05 08:29:47.081085559 +0000 UTC m=+0.245589759 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1) Oct 5 04:29:47 localhost podman[84960]: 2025-10-05 08:29:47.081066289 +0000 UTC m=+0.237894829 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-collectd, version=17.1.9, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:29:47 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:29:47 localhost podman[84953]: 2025-10-05 08:29:47.107084588 +0000 UTC m=+0.270404226 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, tcib_managed=true) Oct 5 04:29:47 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:29:47 localhost podman[84960]: 2025-10-05 08:29:47.159159218 +0000 UTC m=+0.315987818 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, io.buildah.version=1.33.12, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:29:47 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:29:47 localhost podman[84951]: 2025-10-05 08:29:47.212136393 +0000 UTC m=+0.380680573 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible) Oct 5 04:29:47 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:29:47 localhost podman[84954]: 2025-10-05 08:29:47.281587717 +0000 UTC m=+0.442294634 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack 
osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:29:47 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:29:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:29:54 localhost recover_tripleo_nova_virtqemud[85064]: 63458 Oct 5 04:29:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:29:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:29:54 localhost systemd[1]: tmp-crun.jFvvpF.mount: Deactivated successfully. 
Oct 5 04:29:54 localhost podman[85057]: 2025-10-05 08:29:54.917236403 +0000 UTC m=+0.083298273 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, version=17.1.9, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc.) Oct 5 04:29:54 localhost podman[85057]: 2025-10-05 08:29:54.975378009 +0000 UTC m=+0.141439839 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:29:54 localhost systemd[1]: 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:30:04 localhost podman[85086]: 2025-10-05 08:30:04.920236947 +0000 UTC m=+0.087634682 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, 
build-date=2025-07-21T13:07:59, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:30:05 localhost podman[85086]: 2025-10-05 08:30:05.115344267 +0000 UTC m=+0.282742002 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:30:05 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:30:15 localhost systemd[1]: tmp-crun.jsrOHS.mount: Deactivated successfully. 
Oct 5 04:30:15 localhost podman[85119]: 2025-10-05 08:30:15.952311078 +0000 UTC m=+0.112951632 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, release=1, build-date=2025-07-21T15:29:47, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:30:15 localhost podman[85118]: 2025-10-05 08:30:15.930251076 +0000 UTC m=+0.095127985 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:30:15 localhost podman[85119]: 2025-10-05 08:30:15.984377143 +0000 UTC m=+0.145017707 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T15:29:47) Oct 5 04:30:15 localhost podman[85117]: 2025-10-05 08:30:15.994617222 +0000 UTC m=+0.165025742 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:30:15 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:30:16 localhost podman[85118]: 2025-10-05 08:30:16.013131417 +0000 UTC m=+0.178008346 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack 
Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 04:30:16 localhost podman[85117]: 2025-10-05 08:30:16.021559426 +0000 UTC m=+0.191967916 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc.) Oct 5 04:30:16 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:30:16 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:30:17 localhost podman[85200]: 2025-10-05 08:30:17.92202873 +0000 UTC m=+0.074158254 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, 
release=2, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:04:03) Oct 5 04:30:17 localhost podman[85200]: 2025-10-05 08:30:17.930053529 +0000 UTC m=+0.082183023 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3) Oct 5 04:30:17 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:30:17 localhost systemd[1]: tmp-crun.zx8yQE.mount: Deactivated successfully. 
Oct 5 04:30:17 localhost podman[85199]: 2025-10-05 08:30:17.978359407 +0000 UTC m=+0.129980547 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=) Oct 5 04:30:17 localhost podman[85191]: 2025-10-05 08:30:17.9810312 +0000 UTC m=+0.146983441 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1) Oct 5 04:30:18 localhost podman[85192]: 2025-10-05 08:30:18.01884072 +0000 UTC m=+0.175923238 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:30:18 localhost podman[85192]: 2025-10-05 08:30:18.043058271 +0000 UTC m=+0.200140809 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:30:18 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:30:18 localhost podman[85191]: 2025-10-05 08:30:18.098487803 +0000 UTC m=+0.264439984 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, release=1, version=17.1.9) Oct 5 04:30:18 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:30:18 localhost podman[85198]: 2025-10-05 08:30:18.189831394 +0000 UTC m=+0.343668354 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:30:18 localhost podman[85198]: 2025-10-05 08:30:18.199077117 +0000 UTC m=+0.352914117 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:30:18 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:30:18 localhost podman[85199]: 2025-10-05 08:30:18.363276035 +0000 UTC m=+0.514897155 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:30:18 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:30:25 localhost podman[85370]: 2025-10-05 08:30:25.896043895 +0000 UTC m=+0.065540228 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_id=tripleo_step5) Oct 5 04:30:25 localhost podman[85370]: 2025-10-05 08:30:25.926210568 +0000 UTC m=+0.095706911 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, architecture=x86_64, batch=17.1_20250721.1, 
io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.9, vcs-type=git, release=1, tcib_managed=true, config_id=tripleo_step5, 
com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:30:25 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:30:35 localhost podman[85397]: 2025-10-05 08:30:35.910594644 +0000 UTC m=+0.079150369 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:07:59, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:30:36 localhost podman[85397]: 2025-10-05 08:30:36.102219221 +0000 UTC m=+0.270775006 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, 
com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:30:36 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:30:40 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. 
Oct 5 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:30:46 localhost podman[85470]: 2025-10-05 08:30:46.907860165 +0000 UTC m=+0.076769545 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true) Oct 5 04:30:46 localhost systemd[1]: tmp-crun.rF2jFd.mount: Deactivated successfully. Oct 5 04:30:46 localhost podman[85472]: 2025-10-05 08:30:46.953209852 +0000 UTC m=+0.112909021 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vendor=Red Hat, Inc.) 
Oct 5 04:30:46 localhost podman[85471]: 2025-10-05 08:30:46.967596564 +0000 UTC m=+0.133987776 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:30:46 localhost podman[85470]: 2025-10-05 08:30:46.988310418 +0000 UTC m=+0.157219768 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true) Oct 5 04:30:46 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:30:46 localhost podman[85472]: 2025-10-05 08:30:46.998431485 +0000 UTC m=+0.158130644 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, batch=17.1_20250721.1) Oct 5 04:30:47 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:30:47 localhost podman[85471]: 2025-10-05 08:30:47.055169962 +0000 UTC m=+0.221561194 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, com.redhat.component=openstack-cron-container) Oct 5 04:30:47 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:30:48 localhost systemd[1]: tmp-crun.9VuUam.mount: Deactivated successfully. 
Oct 5 04:30:48 localhost podman[85543]: 2025-10-05 08:30:48.920861358 +0000 UTC m=+0.091681651 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, distribution-scope=public, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 5 04:30:48 localhost systemd[1]: tmp-crun.qCN8US.mount: Deactivated successfully. Oct 5 04:30:48 localhost podman[85544]: 2025-10-05 08:30:48.940446632 +0000 UTC m=+0.106425454 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, architecture=x86_64) Oct 5 04:30:48 localhost podman[85544]: 2025-10-05 08:30:48.965581557 +0000 UTC m=+0.131560419 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, container_name=ovn_controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:30:48 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:30:48 localhost podman[85545]: 2025-10-05 08:30:48.984944196 +0000 UTC m=+0.148860111 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, release=1, container_name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:30:48 localhost podman[85545]: 2025-10-05 08:30:48.99061058 +0000 UTC m=+0.154526485 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, 
io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid) Oct 5 04:30:49 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:30:49 localhost podman[85546]: 2025-10-05 08:30:49.013497904 +0000 UTC m=+0.177636295 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red 
Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:30:49 localhost podman[85543]: 2025-10-05 08:30:49.037382856 +0000 UTC m=+0.208203109 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:30:49 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:30:49 localhost podman[85552]: 2025-10-05 08:30:49.041238551 +0000 UTC m=+0.201713883 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:30:49 localhost podman[85552]: 2025-10-05 08:30:49.123054042 +0000 UTC m=+0.283529354 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, version=17.1.9, build-date=2025-07-21T13:04:03, release=2, batch=17.1_20250721.1, container_name=collectd, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Oct 5 04:30:49 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:30:49 localhost podman[85546]: 2025-10-05 08:30:49.377375399 +0000 UTC m=+0.541513820 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-type=git, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:30:49 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:30:56 localhost podman[85645]: 2025-10-05 08:30:56.89807308 +0000 UTC m=+0.069538258 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, vcs-type=git, build-date=2025-07-21T14:48:37, container_name=nova_compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:30:56 localhost podman[85645]: 2025-10-05 08:30:56.927107572 +0000 UTC m=+0.098572720 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp 
openstack osp-17.1, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:30:56 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:31:06 localhost systemd[1]: tmp-crun.18YafY.mount: Deactivated successfully. Oct 5 04:31:06 localhost podman[85672]: 2025-10-05 08:31:06.931190783 +0000 UTC m=+0.093561972 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, release=1, io.buildah.version=1.33.12) Oct 5 04:31:07 localhost podman[85672]: 2025-10-05 08:31:07.121495194 +0000 UTC m=+0.283866393 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step1, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, version=17.1.9) Oct 5 04:31:07 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:31:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:31:11 localhost recover_tripleo_nova_virtqemud[85705]: 63458 Oct 5 04:31:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Oct 5 04:31:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:31:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:31:17 localhost podman[85707]: 2025-10-05 08:31:17.920945699 +0000 UTC m=+0.085699447 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:31:17 localhost podman[85707]: 2025-10-05 08:31:17.952060838 +0000 UTC m=+0.116814516 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, release=1, version=17.1.9) Oct 5 04:31:17 localhost systemd[1]: tmp-crun.d3LqhI.mount: Deactivated successfully. Oct 5 04:31:17 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:31:17 localhost podman[85708]: 2025-10-05 08:31:17.97302063 +0000 UTC m=+0.136124034 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:31:17 localhost podman[85708]: 2025-10-05 08:31:17.980852784 +0000 UTC m=+0.143956118 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:31:18 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:31:18 localhost podman[85709]: 2025-10-05 08:31:18.023434065 +0000 UTC m=+0.183727622 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:29:47, release=1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 5 04:31:18 localhost podman[85709]: 2025-10-05 08:31:18.079302509 +0000 UTC m=+0.239596026 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:31:18 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:31:18 localhost systemd[1]: tmp-crun.uExYu1.mount: Deactivated successfully. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. 
Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:31:19 localhost podman[85787]: 2025-10-05 08:31:19.917828453 +0000 UTC m=+0.074783672 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, release=1, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:31:19 localhost podman[85781]: 2025-10-05 08:31:19.966909932 +0000 UTC m=+0.123408487 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12) Oct 5 04:31:19 localhost podman[85781]: 2025-10-05 08:31:19.978335313 +0000 UTC m=+0.134833898 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, io.openshift.expose-services=, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true) Oct 5 04:31:19 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:31:20 localhost podman[85780]: 2025-10-05 08:31:20.0321363 +0000 UTC m=+0.193624372 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true) Oct 5 04:31:20 localhost podman[85780]: 2025-10-05 08:31:20.050619205 +0000 UTC 
m=+0.212107247 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:31:20 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:31:20 localhost podman[85779]: 2025-10-05 08:31:20.130725339 +0000 UTC m=+0.295649604 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:31:20 localhost podman[85790]: 2025-10-05 08:31:20.203200136 +0000 UTC m=+0.355449716 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=2, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=collectd, 
tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:31:20 localhost podman[85779]: 2025-10-05 08:31:20.214613197 +0000 UTC m=+0.379537432 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, version=17.1.9, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
batch=17.1_20250721.1, release=1, architecture=x86_64, distribution-scope=public) Oct 5 04:31:20 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:31:20 localhost podman[85790]: 2025-10-05 08:31:20.267393437 +0000 UTC m=+0.419643067 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=collectd, distribution-scope=public, config_id=tripleo_step3, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:31:20 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:31:20 localhost podman[85787]: 2025-10-05 08:31:20.288408649 +0000 UTC m=+0.445363838 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:31:20 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:31:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:31:27 localhost podman[85965]: 2025-10-05 08:31:27.91467309 +0000 UTC m=+0.081089473 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, version=17.1.9) Oct 5 04:31:27 localhost podman[85965]: 2025-10-05 08:31:27.968644482 +0000 UTC m=+0.135060845 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 04:31:27 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:31:37 localhost podman[85992]: 2025-10-05 08:31:37.907654449 +0000 UTC m=+0.078923073 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.9, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public) Oct 5 04:31:38 localhost podman[85992]: 2025-10-05 08:31:38.099688047 +0000 UTC m=+0.270956701 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:31:38 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:31:48 localhost systemd[1]: tmp-crun.ZlM8as.mount: Deactivated successfully. 
Oct 5 04:31:48 localhost podman[86065]: 2025-10-05 08:31:48.926106456 +0000 UTC m=+0.089822450 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, tcib_managed=true, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-cron-container) Oct 5 04:31:48 localhost podman[86065]: 2025-10-05 08:31:48.964052092 +0000 UTC m=+0.127768036 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:31:48 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:31:49 localhost podman[86064]: 2025-10-05 08:31:48.96803883 +0000 UTC m=+0.133761409 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:31:49 localhost podman[86064]: 2025-10-05 08:31:49.053084929 +0000 UTC m=+0.218807518 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, 
release=1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:31:49 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:31:49 localhost podman[86066]: 2025-10-05 08:31:49.020905682 +0000 UTC m=+0.179954239 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp 
osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Oct 5 04:31:49 localhost podman[86066]: 2025-10-05 08:31:49.106166267 +0000 UTC m=+0.265214834 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1) Oct 5 04:31:49 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:31:50 localhost systemd[1]: tmp-crun.42yZB2.mount: Deactivated successfully. Oct 5 04:31:50 localhost systemd[1]: tmp-crun.V0R1DW.mount: Deactivated successfully. 
Oct 5 04:31:51 localhost podman[86144]: 2025-10-05 08:31:51.000532425 +0000 UTC m=+0.150311111 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:04:03, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:31:51 localhost podman[86144]: 2025-10-05 08:31:51.011286258 +0000 UTC m=+0.161064904 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-type=git, version=17.1.9, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, release=2, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd) Oct 5 04:31:51 localhost podman[86136]: 2025-10-05 08:31:50.960645156 +0000 UTC m=+0.117708001 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:31:51 localhost podman[86137]: 2025-10-05 08:31:50.981434934 +0000 UTC m=+0.138016775 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, container_name=iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, vcs-type=git) Oct 5 04:31:51 localhost podman[86135]: 2025-10-05 08:31:50.930228607 +0000 UTC m=+0.093839831 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, 
vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:31:51 localhost podman[86136]: 2025-10-05 08:31:51.039839287 +0000 UTC m=+0.196902142 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, distribution-scope=public) Oct 5 04:31:51 localhost podman[86143]: 2025-10-05 08:31:51.046880828 +0000 UTC m=+0.200482178 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 5 04:31:51 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:31:51 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:31:51 localhost podman[86135]: 2025-10-05 08:31:51.11185673 +0000 UTC m=+0.275468014 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, release=1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53) Oct 5 04:31:51 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:31:51 localhost podman[86137]: 2025-10-05 08:31:51.165905955 +0000 UTC m=+0.322487836 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Oct 5 04:31:51 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:31:51 localhost podman[86143]: 2025-10-05 08:31:51.392093883 +0000 UTC m=+0.545695213 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:31:51 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:31:58 localhost podman[86246]: 2025-10-05 08:31:58.908719794 +0000 UTC m=+0.077203787 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:31:58 localhost podman[86246]: 2025-10-05 08:31:58.936118881 +0000 UTC m=+0.104602874 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
com.redhat.component=openstack-nova-compute-container, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=) Oct 5 04:31:58 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:32:08 localhost podman[86271]: 2025-10-05 08:32:08.908477368 +0000 UTC m=+0.077045892 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.openshift.expose-services=) Oct 5 04:32:09 localhost podman[86271]: 2025-10-05 08:32:09.103246801 +0000 UTC m=+0.271815375 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9) Oct 5 04:32:09 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:32:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:32:19 localhost podman[86301]: 2025-10-05 08:32:19.914476538 +0000 UTC m=+0.081196837 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:32:19 localhost podman[86301]: 2025-10-05 08:32:19.925159989 +0000 UTC m=+0.091880278 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, version=17.1.9, architecture=x86_64) Oct 5 04:32:19 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:32:19 localhost podman[86302]: 2025-10-05 08:32:19.972170472 +0000 UTC m=+0.134445889 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public) Oct 5 04:32:20 localhost systemd[1]: tmp-crun.QVsxpo.mount: Deactivated successfully. 
Oct 5 04:32:20 localhost podman[86300]: 2025-10-05 08:32:20.019544703 +0000 UTC m=+0.189419498 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:32:20 localhost podman[86302]: 2025-10-05 08:32:20.026079992 +0000 UTC m=+0.188355409 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, release=1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc.) Oct 5 04:32:20 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:32:20 localhost podman[86300]: 2025-10-05 08:32:20.078259394 +0000 UTC m=+0.248134219 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:32:20 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:32:21 localhost podman[86389]: 2025-10-05 08:32:21.756316852 +0000 UTC m=+0.091375422 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9) Oct 5 04:32:21 localhost systemd[1]: tmp-crun.6YaFpc.mount: Deactivated successfully. Oct 5 04:32:21 localhost podman[86390]: 2025-10-05 08:32:21.82845382 +0000 UTC m=+0.159384328 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:32:21 localhost podman[86396]: 2025-10-05 08:32:21.865211362 +0000 UTC m=+0.191704559 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 04:32:21 localhost podman[86390]: 2025-10-05 08:32:21.886573745 +0000 UTC m=+0.217504253 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, release=1, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team) Oct 5 04:32:21 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:32:21 localhost podman[86389]: 2025-10-05 08:32:21.939007576 +0000 UTC m=+0.274066116 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:32:21 localhost podman[86402]: 2025-10-05 08:32:21.787938495 +0000 UTC m=+0.112626472 container 
health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, release=2, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-type=git, maintainer=OpenStack TripleO Team) Oct 5 04:32:21 localhost podman[86388]: 2025-10-05 08:32:21.978901033 +0000 UTC m=+0.317236083 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, container_name=ovn_metadata_agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.expose-services=) Oct 5 04:32:21 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:32:22 localhost podman[86402]: 2025-10-05 08:32:22.030270764 +0000 UTC m=+0.354958741 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:32:22 localhost podman[86388]: 2025-10-05 08:32:22.054239458 +0000 UTC m=+0.392574528 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:32:22 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:32:22 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:32:22 localhost podman[86396]: 2025-10-05 08:32:22.185985082 +0000 UTC m=+0.512478219 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container) Oct 5 04:32:22 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:32:29 localhost podman[86559]: 2025-10-05 08:32:29.917882603 +0000 UTC m=+0.084276990 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, tcib_managed=true, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.expose-services=) Oct 5 04:32:29 localhost podman[86559]: 2025-10-05 08:32:29.970605051 +0000 UTC m=+0.136999388 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vcs-type=git, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:32:29 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:32:39 localhost systemd[1]: tmp-crun.tMl3AC.mount: Deactivated successfully. Oct 5 04:32:39 localhost podman[86586]: 2025-10-05 08:32:39.925913983 +0000 UTC m=+0.093087599 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64) Oct 5 04:32:40 localhost podman[86586]: 2025-10-05 08:32:40.130213826 +0000 UTC m=+0.297387412 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:32:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:32:40 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:32:40 localhost recover_tripleo_nova_virtqemud[86617]: 63458 Oct 5 04:32:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:32:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:32:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:32:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:32:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:32:50 localhost systemd[1]: tmp-crun.fIPOak.mount: Deactivated successfully. Oct 5 04:32:50 localhost podman[86664]: 2025-10-05 08:32:50.969836798 +0000 UTC m=+0.132339791 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9) Oct 5 04:32:50 localhost podman[86664]: 2025-10-05 08:32:50.984115378 +0000 UTC m=+0.146618401 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:32:50 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:32:51 localhost podman[86663]: 2025-10-05 08:32:50.937339621 +0000 UTC m=+0.102688772 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.openshift.expose-services=, release=1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:32:51 localhost podman[86663]: 2025-10-05 08:32:51.068205141 +0000 UTC m=+0.233554342 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:32:51 localhost podman[86665]: 2025-10-05 08:32:51.080818584 +0000 UTC m=+0.241767115 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:32:51 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:32:51 localhost podman[86665]: 2025-10-05 08:32:51.10815274 +0000 UTC m=+0.269101301 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47) Oct 5 04:32:51 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:32:52 localhost podman[86734]: 2025-10-05 08:32:52.021960743 +0000 UTC m=+0.085714558 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:32:52 localhost podman[86734]: 2025-10-05 08:32:52.036202241 +0000 UTC m=+0.099956056 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:32:52 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:32:52 localhost podman[86751]: 2025-10-05 08:32:52.107770154 +0000 UTC m=+0.077701141 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, vcs-type=git, container_name=ovn_controller) Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:32:52 localhost podman[86751]: 2025-10-05 08:32:52.133476275 +0000 UTC m=+0.103407272 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:32:52 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:32:52 localhost podman[86776]: 2025-10-05 08:32:52.232141465 +0000 UTC m=+0.093684065 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3) Oct 5 04:32:52 localhost podman[86776]: 2025-10-05 08:32:52.239830485 +0000 UTC m=+0.101373135 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, vcs-type=git, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, name=rhosp17/openstack-collectd) Oct 5 04:32:52 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:32:52 localhost podman[86772]: 2025-10-05 08:32:52.370716625 +0000 UTC m=+0.235511674 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true) Oct 5 04:32:52 localhost podman[86804]: 2025-10-05 08:32:52.339560936 +0000 UTC m=+0.099666360 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:32:52 localhost podman[86772]: 2025-10-05 08:32:52.42224273 +0000 UTC m=+0.287037769 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:32:52 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:32:52 localhost podman[86804]: 2025-10-05 08:32:52.739169324 +0000 UTC m=+0.499274758 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37) Oct 5 04:32:52 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:32:52 localhost systemd[1]: tmp-crun.EeLQSs.mount: Deactivated successfully. Oct 5 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:33:00 localhost podman[86840]: 2025-10-05 08:33:00.910699676 +0000 UTC m=+0.078787150 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, 
config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1) Oct 5 
04:33:00 localhost podman[86840]: 2025-10-05 08:33:00.965480491 +0000 UTC m=+0.133567965 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team) Oct 5 04:33:00 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:33:10 localhost podman[86865]: 2025-10-05 08:33:10.91223042 +0000 UTC m=+0.081548785 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, release=1, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:33:11 localhost podman[86865]: 2025-10-05 08:33:11.106661414 +0000 UTC m=+0.275979789 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, version=17.1.9, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:33:11 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:33:21 localhost systemd[1]: tmp-crun.iolBJc.mount: Deactivated successfully. 
Oct 5 04:33:21 localhost podman[86894]: 2025-10-05 08:33:21.936141029 +0000 UTC m=+0.099824524 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, release=1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team) Oct 5 04:33:21 localhost podman[86895]: 2025-10-05 08:33:21.985932966 +0000 UTC m=+0.144424389 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, 
config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cron, release=1, io.buildah.version=1.33.12) Oct 5 04:33:21 localhost podman[86894]: 2025-10-05 08:33:21.990233954 +0000 UTC m=+0.153917439 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, release=1, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:33:22 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:33:22 localhost podman[86896]: 2025-10-05 08:33:22.037622507 +0000 UTC m=+0.193314504 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:33:22 localhost podman[86895]: 2025-10-05 08:33:22.047116025 +0000 UTC m=+0.205607448 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12) Oct 5 04:33:22 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:33:22 localhost podman[86896]: 2025-10-05 08:33:22.089294925 +0000 UTC m=+0.244986892 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:33:22 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:33:22 localhost podman[86967]: 2025-10-05 08:33:22.150508945 +0000 UTC m=+0.063281276 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true) Oct 5 04:33:22 localhost podman[86967]: 2025-10-05 08:33:22.16311678 +0000 UTC m=+0.075889131 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:33:22 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:33:22 localhost podman[86988]: 2025-10-05 08:33:22.261239076 +0000 UTC m=+0.066571477 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:33:22 localhost podman[86988]: 2025-10-05 08:33:22.282351021 +0000 UTC m=+0.087683442 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:33:22 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:33:22 localhost podman[87012]: 2025-10-05 08:33:22.38421929 +0000 UTC m=+0.073829695 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:04:03, container_name=collectd, version=17.1.9, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:33:22 localhost podman[87012]: 2025-10-05 08:33:22.394405387 +0000 UTC m=+0.084015812 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, container_name=collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible) Oct 5 04:33:22 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:33:22 localhost podman[87032]: 2025-10-05 08:33:22.912891008 +0000 UTC m=+0.078962204 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:33:22 localhost systemd[1]: tmp-crun.fWK5Le.mount: Deactivated successfully. Oct 5 04:33:22 localhost podman[87031]: 2025-10-05 08:33:22.980780061 +0000 UTC m=+0.148912054 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1) Oct 5 04:33:23 localhost podman[87031]: 2025-10-05 08:33:23.046169524 +0000 UTC m=+0.214301467 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:33:23 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:33:23 localhost podman[87032]: 2025-10-05 08:33:23.285237174 +0000 UTC m=+0.451308380 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, container_name=nova_migration_target, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:33:23 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:33:31 localhost systemd[1]: tmp-crun.W9pnJi.mount: Deactivated successfully. Oct 5 04:33:31 localhost podman[87155]: 2025-10-05 08:33:31.91600469 +0000 UTC m=+0.087890758 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute) Oct 5 04:33:31 localhost podman[87155]: 
2025-10-05 08:33:31.944552498 +0000 UTC m=+0.116438526 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, container_name=nova_compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9) Oct 5 04:33:31 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:33:41 localhost systemd[1]: tmp-crun.1j7von.mount: Deactivated successfully. 
Oct 5 04:33:41 localhost podman[87181]: 2025-10-05 08:33:41.878965173 +0000 UTC m=+0.054463178 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:33:42 localhost podman[87181]: 2025-10-05 08:33:42.034223557 +0000 UTC m=+0.209721592 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, tcib_managed=true, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, vcs-type=git, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:33:42 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:33:52 localhost systemd[1]: tmp-crun.bjH2l6.mount: Deactivated successfully. Oct 5 04:33:52 localhost systemd[1]: tmp-crun.4qm6W4.mount: Deactivated successfully. 
Oct 5 04:33:52 localhost podman[87254]: 2025-10-05 08:33:52.963933104 +0000 UTC m=+0.089037909 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, release=1) Oct 5 04:33:52 localhost podman[87257]: 2025-10-05 08:33:52.968071838 +0000 UTC 
m=+0.083856769 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, release=2, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, 
name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12) Oct 5 04:33:52 localhost podman[87257]: 2025-10-05 08:33:52.973047423 +0000 UTC m=+0.088832314 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, release=2, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, container_name=collectd, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:33:52 localhost podman[87254]: 2025-10-05 08:33:52.979021866 +0000 UTC m=+0.104126671 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:33:52 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:33:52 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:33:53 localhost podman[87255]: 2025-10-05 08:33:53.012449788 +0000 UTC m=+0.134251063 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:33:53 localhost podman[87258]: 2025-10-05 08:33:53.018558214 +0000 UTC m=+0.131963800 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, tcib_managed=true, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:33:53 localhost podman[87255]: 2025-10-05 08:33:53.032020792 +0000 UTC m=+0.153822077 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:33:53 localhost podman[87258]: 2025-10-05 08:33:53.050123145 +0000 UTC m=+0.163528741 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=logrotate_crond, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:33:53 localhost podman[87256]: 2025-10-05 08:33:53.056419577 +0000 UTC m=+0.173326598 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, tcib_managed=true, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 5 04:33:53 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:33:53 localhost podman[87256]: 2025-10-05 08:33:53.065961477 +0000 UTC m=+0.182868488 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:27:15) Oct 5 04:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:33:53 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:33:53 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:33:53 localhost podman[87370]: 2025-10-05 08:33:53.130515298 +0000 UTC m=+0.046455598 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, release=1) Oct 5 04:33:53 localhost podman[87265]: 2025-10-05 08:33:53.16175702 +0000 UTC m=+0.277649263 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:33:53 localhost podman[87370]: 2025-10-05 08:33:53.198104401 +0000 UTC m=+0.114044681 container 
exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:33:53 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:33:53 localhost podman[87265]: 2025-10-05 08:33:53.207455066 +0000 UTC m=+0.323347299 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi) Oct 5 04:33:53 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:33:53 localhost podman[87412]: 2025-10-05 08:33:53.883166077 +0000 UTC m=+0.055152766 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
build-date=2025-07-21T14:48:37, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 04:33:54 localhost podman[87412]: 2025-10-05 08:33:54.256134116 +0000 UTC m=+0.428120805 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, config_id=tripleo_step4) Oct 5 04:33:54 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:34:02 localhost podman[87434]: 2025-10-05 08:34:02.942823261 +0000 UTC m=+0.111122468 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:34:02 localhost podman[87434]: 2025-10-05 08:34:02.974030536 +0000 UTC m=+0.142329733 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, config_id=tripleo_step5, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp 
osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:34:02 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:34:12 localhost systemd[1]: tmp-crun.8xnkAt.mount: Deactivated successfully. Oct 5 04:34:12 localhost podman[87462]: 2025-10-05 08:34:12.916216435 +0000 UTC m=+0.088694432 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, release=1, tcib_managed=true, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=metrics_qdr) Oct 5 04:34:13 localhost podman[87462]: 2025-10-05 08:34:13.13425233 +0000 UTC m=+0.306730337 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:34:13 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:34:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:34:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:34:23 localhost recover_tripleo_nova_virtqemud[87536]: 63458 Oct 5 04:34:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:34:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:34:23 localhost podman[87505]: 2025-10-05 08:34:23.943300732 +0000 UTC m=+0.094752777 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=2, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd) Oct 5 04:34:23 localhost podman[87505]: 2025-10-05 08:34:23.950978523 +0000 UTC m=+0.102430568 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:34:23 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:34:23 localhost podman[87495]: 2025-10-05 08:34:23.986655561 +0000 UTC m=+0.133441228 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, container_name=iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid) Oct 5 04:34:24 localhost podman[87493]: 2025-10-05 08:34:24.047170359 +0000 UTC m=+0.208370861 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, architecture=x86_64, release=1, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:34:24 localhost podman[87493]: 2025-10-05 08:34:24.100187082 +0000 UTC m=+0.261387624 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git) Oct 5 04:34:24 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:34:24 localhost podman[87492]: 2025-10-05 08:34:24.101840437 +0000 UTC m=+0.264892800 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 04:34:24 localhost podman[87494]: 2025-10-05 08:34:24.161227475 +0000 UTC m=+0.316857484 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, release=1, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:34:24 localhost podman[87508]: 2025-10-05 08:34:24.208433448 +0000 UTC m=+0.340268776 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, build-date=2025-07-21T13:07:52, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:34:24 localhost podman[87492]: 2025-10-05 08:34:24.237898925 +0000 UTC m=+0.400951248 
container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, release=1, 
build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:34:24 localhost podman[87494]: 2025-10-05 08:34:24.238386968 +0000 UTC m=+0.394016967 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:45:33) Oct 5 04:34:24 localhost podman[87508]: 2025-10-05 08:34:24.244125316 +0000 UTC m=+0.375960634 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack 
Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:34:24 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:34:24 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:34:24 localhost podman[87495]: 2025-10-05 08:34:24.278715274 +0000 UTC m=+0.425500961 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:34:24 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:34:24 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:34:24 localhost podman[87634]: 2025-10-05 08:34:24.40957445 +0000 UTC m=+0.095594101 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team) Oct 5 04:34:24 localhost podman[87522]: 2025-10-05 08:34:24.367455245 +0000 UTC m=+0.506023847 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Oct 5 04:34:24 localhost podman[87522]: 2025-10-05 08:34:24.45006352 +0000 UTC m=+0.588632092 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:34:24 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:34:24 localhost podman[87634]: 2025-10-05 08:34:24.744356434 +0000 UTC m=+0.430376115 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:34:24 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:34:24 localhost systemd[1]: tmp-crun.0DTDig.mount: Deactivated successfully. Oct 5 04:34:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:34:33 localhost systemd[1]: tmp-crun.278GWZ.mount: Deactivated successfully. Oct 5 04:34:33 localhost podman[87748]: 2025-10-05 08:34:33.926096505 +0000 UTC m=+0.090861702 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5) Oct 5 04:34:33 localhost podman[87748]: 2025-10-05 08:34:33.956150748 +0000 UTC m=+0.120915975 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, io.buildah.version=1.33.12, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=) Oct 5 04:34:33 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:34:43 localhost podman[87775]: 2025-10-05 08:34:43.908465844 +0000 UTC m=+0.080473587 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:34:44 localhost podman[87775]: 2025-10-05 08:34:44.101008951 +0000 UTC m=+0.273016674 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, io.buildah.version=1.33.12, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64) Oct 5 04:34:44 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:34:54 localhost podman[87857]: 2025-10-05 08:34:54.93685053 +0000 UTC m=+0.087002606 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc.) Oct 5 04:34:54 localhost systemd[1]: tmp-crun.5TRqmR.mount: Deactivated successfully. Oct 5 04:34:55 localhost podman[87849]: 2025-10-05 08:34:55.010038565 +0000 UTC m=+0.174325678 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, distribution-scope=public, 
com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:34:55 localhost podman[87850]: 2025-10-05 08:34:55.049728912 +0000 UTC m=+0.210046007 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:34:55 localhost podman[87851]: 2025-10-05 08:34:54.959836949 +0000 UTC m=+0.108177916 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64) Oct 5 04:34:55 localhost podman[87849]: 2025-10-05 08:34:55.082372947 +0000 UTC m=+0.246659990 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:34:55 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:34:55 localhost podman[87870]: 2025-10-05 08:34:55.096627988 +0000 UTC m=+0.242442145 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1) Oct 5 04:34:55 localhost podman[87850]: 2025-10-05 08:34:55.111124825 +0000 UTC m=+0.271441940 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.component=openstack-ceilometer-compute-container, release=1, tcib_managed=true, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Oct 5 04:34:55 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:34:55 localhost podman[87863]: 2025-10-05 08:34:55.098856959 +0000 UTC m=+0.247457963 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, 
vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Oct 5 04:34:55 localhost podman[87851]: 2025-10-05 08:34:55.152109508 +0000 UTC m=+0.300450485 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:34:55 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: 
Deactivated successfully. Oct 5 04:34:55 localhost podman[87869]: 2025-10-05 08:34:55.161518376 +0000 UTC m=+0.308776822 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, build-date=2025-07-21T13:07:52) Oct 5 04:34:55 localhost podman[87870]: 2025-10-05 08:34:55.167903161 +0000 UTC m=+0.313717288 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4) Oct 5 04:34:55 localhost podman[87863]: 2025-10-05 08:34:55.182043058 +0000 UTC m=+0.330644032 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.openshift.expose-services=, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Oct 5 04:34:55 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:34:55 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:34:55 localhost podman[87869]: 2025-10-05 08:34:55.196176105 +0000 UTC m=+0.343434571 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Oct 5 04:34:55 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:34:55 localhost podman[87848]: 2025-10-05 08:34:55.254883534 +0000 UTC m=+0.421236984 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:34:55 localhost podman[87857]: 2025-10-05 08:34:55.282047459 +0000 UTC m=+0.432199535 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, release=1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:34:55 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 04:34:55 localhost podman[87848]: 2025-10-05 08:34:55.292981308 +0000 UTC m=+0.459334738 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Oct 5 04:34:55 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:34:55 localhost systemd[1]: tmp-crun.RfdZCD.mount: Deactivated successfully. Oct 5 04:35:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:35:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 549 writes, 2255 keys, 549 commit groups, 1.0 writes per commit group, ingest: 3.11 MB, 0.01 MB/s#012Interval WAL: 549 writes, 193 syncs, 2.84 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:35:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:35:04 localhost podman[88021]: 2025-10-05 08:35:04.913907354 +0000 UTC m=+0.079343665 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1) Oct 5 04:35:04 localhost podman[88021]: 2025-10-05 08:35:04.967293987 +0000 UTC m=+0.132730268 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:35:04 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:35:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:35:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5637 writes, 24K keys, 5637 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5637 writes, 711 syncs, 7.93 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 400 writes, 1570 keys, 400 commit groups, 1.0 writes per commit group, ingest: 1.73 MB, 0.00 MB/s#012Interval WAL: 400 writes, 139 syncs, 2.88 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:35:14 localhost podman[88049]: 2025-10-05 08:35:14.913976019 +0000 UTC m=+0.082125921 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:07:59, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 5 04:35:15 localhost podman[88049]: 2025-10-05 08:35:15.108059217 +0000 UTC m=+0.276209089 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, release=1, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:35:15 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:35:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:35:25 localhost podman[88094]: 2025-10-05 08:35:25.953827947 +0000 UTC m=+0.092293390 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, container_name=collectd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=) Oct 5 04:35:25 localhost systemd[1]: tmp-crun.fF1Ma4.mount: Deactivated successfully. Oct 5 04:35:25 localhost podman[88094]: 2025-10-05 08:35:25.988077246 +0000 UTC m=+0.126542639 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:35:26 localhost podman[88081]: 2025-10-05 08:35:26.016594487 +0000 UTC m=+0.166167504 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, release=1, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Oct 5 04:35:26 localhost podman[88087]: 2025-10-05 08:35:26.048006088 +0000 UTC m=+0.186837431 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
batch=17.1_20250721.1) Oct 5 04:35:26 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:35:26 localhost podman[88079]: 2025-10-05 08:35:25.997969656 +0000 UTC m=+0.155506472 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:35:26 localhost podman[88079]: 2025-10-05 08:35:26.076021595 +0000 UTC m=+0.233558421 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git) Oct 5 04:35:26 localhost podman[88081]: 2025-10-05 08:35:26.083536732 +0000 UTC m=+0.233109769 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, 
com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:35:26 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:35:26 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:35:26 localhost podman[88111]: 2025-10-05 08:35:26.116695951 +0000 UTC m=+0.247306878 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:35:26 localhost podman[88080]: 2025-10-05 08:35:26.163294107 +0000 UTC m=+0.316549515 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:35:26 localhost podman[88111]: 2025-10-05 08:35:26.175489032 +0000 UTC m=+0.306100009 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, version=17.1.9, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:35:26 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:35:26 localhost podman[88088]: 2025-10-05 08:35:25.992653582 +0000 UTC m=+0.136188684 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:35:26 localhost podman[88101]: 2025-10-05 08:35:26.209908705 +0000 UTC m=+0.342795065 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, container_name=logrotate_crond) Oct 5 04:35:26 localhost podman[88080]: 2025-10-05 08:35:26.21925239 +0000 UTC m=+0.372507748 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:35:26 localhost podman[88087]: 2025-10-05 08:35:26.228225727 +0000 UTC m=+0.367057100 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., release=1, container_name=iscsid, vcs-type=git, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:35:26 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:35:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:35:26 localhost podman[88101]: 2025-10-05 08:35:26.269522298 +0000 UTC m=+0.402408648 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 
17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.9) Oct 5 04:35:26 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:35:26 localhost podman[88088]: 2025-10-05 08:35:26.394539494 +0000 UTC m=+0.538074626 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, 
release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12) Oct 5 04:35:26 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:35:26 localhost systemd[1]: tmp-crun.5CpCh6.mount: Deactivated successfully. Oct 5 04:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:35:35 localhost podman[88328]: 2025-10-05 08:35:35.917078513 +0000 UTC m=+0.082183523 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:35:35 localhost podman[88328]: 2025-10-05 08:35:35.944889215 +0000 UTC m=+0.109994205 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:35:35 localhost systemd[1]: 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:35:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:35:37 localhost recover_tripleo_nova_virtqemud[88355]: 63458 Oct 5 04:35:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:35:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:35:45 localhost podman[88356]: 2025-10-05 08:35:45.918132825 +0000 UTC m=+0.080469925 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git) Oct 5 04:35:46 localhost podman[88356]: 2025-10-05 08:35:46.098976152 +0000 UTC m=+0.261313182 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:35:46 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:35:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:35:56 localhost podman[88449]: 2025-10-05 08:35:56.960369391 +0000 UTC m=+0.099406665 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:35:56 localhost podman[88443]: 2025-10-05 08:35:56.97347625 +0000 UTC m=+0.125989233 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, distribution-scope=public, release=1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:35:57 localhost podman[88432]: 2025-10-05 08:35:56.942176253 +0000 UTC m=+0.095574811 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.33.12, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-iscsid-container) Oct 5 04:35:57 localhost podman[88429]: 2025-10-05 08:35:56.923646335 +0000 UTC m=+0.087683234 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:28:53, tcib_managed=true, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:35:57 localhost podman[88431]: 2025-10-05 08:35:57.057616406 +0000 UTC m=+0.213536272 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:35:57 localhost podman[88429]: 2025-10-05 08:35:57.061027679 +0000 UTC m=+0.225064588 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:35:57 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:35:57 localhost podman[88431]: 2025-10-05 08:35:57.085060288 +0000 UTC m=+0.240980174 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33) Oct 5 04:35:57 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:35:57 localhost podman[88430]: 2025-10-05 08:35:57.042965994 +0000 UTC m=+0.205990035 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 04:35:57 localhost podman[88444]: 2025-10-05 08:35:57.041849864 +0000 UTC m=+0.186646066 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.9, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, release=2, architecture=x86_64) Oct 5 04:35:57 localhost podman[88449]: 2025-10-05 08:35:57.106440513 +0000 UTC m=+0.245477797 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, 
architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:35:57 localhost podman[88430]: 2025-10-05 08:35:57.121309431 +0000 UTC m=+0.284333492 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:35:57 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:35:57 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:35:57 localhost podman[88432]: 2025-10-05 08:35:57.157118942 +0000 UTC m=+0.310517490 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:35:57 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:35:57 localhost podman[88444]: 2025-10-05 08:35:57.172617477 +0000 UTC m=+0.317413669 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, release=2, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:04:03, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:35:57 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:35:57 localhost podman[88462]: 2025-10-05 08:35:57.15703688 +0000 UTC m=+0.296771193 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, distribution-scope=public) Oct 5 04:35:57 localhost podman[88462]: 2025-10-05 08:35:57.240195599 +0000 UTC m=+0.379929962 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, 
vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.9, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:35:57 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:35:57 localhost podman[88443]: 2025-10-05 08:35:57.307159424 +0000 UTC m=+0.459672467 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:35:57 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:36:06 localhost systemd[1]: tmp-crun.5NnD6T.mount: Deactivated successfully. 
Oct 5 04:36:06 localhost podman[88607]: 2025-10-05 08:36:06.907871155 +0000 UTC m=+0.080190819 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, release=1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container) Oct 5 04:36:06 localhost podman[88607]: 2025-10-05 08:36:06.965924796 +0000 UTC m=+0.138244460 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute) Oct 5 04:36:06 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:36:16 localhost systemd[1]: tmp-crun.5WoB3l.mount: Deactivated successfully. Oct 5 04:36:16 localhost podman[88631]: 2025-10-05 08:36:16.909203144 +0000 UTC m=+0.079023987 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T13:07:59, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 04:36:17 localhost podman[88631]: 2025-10-05 08:36:17.070809854 +0000 UTC m=+0.240630757 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, release=1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:36:17 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:36:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:36:27 localhost podman[88674]: 2025-10-05 08:36:27.965093943 +0000 UTC m=+0.113190484 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public) Oct 5 04:36:28 localhost podman[88669]: 2025-10-05 08:36:28.006319772 +0000 UTC m=+0.163223894 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64) Oct 5 04:36:28 localhost podman[88669]: 2025-10-05 08:36:28.014288331 +0000 UTC m=+0.171192473 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 5 04:36:28 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:36:28 localhost podman[88662]: 2025-10-05 08:36:27.941388253 +0000 UTC m=+0.104827704 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vcs-type=git) Oct 5 04:36:28 localhost podman[88695]: 2025-10-05 08:36:28.015976857 +0000 UTC 
m=+0.143775182 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, 
vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:36:28 localhost podman[88662]: 2025-10-05 08:36:28.071569771 +0000 UTC m=+0.235009222 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, container_name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:36:28 localhost podman[88661]: 2025-10-05 08:36:28.077938815 +0000 UTC m=+0.246067965 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, version=17.1.9) Oct 5 04:36:28 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:36:28 localhost podman[88695]: 2025-10-05 08:36:28.093902803 +0000 UTC m=+0.221701158 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, 
io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Oct 5 04:36:28 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:36:28 localhost podman[88661]: 2025-10-05 08:36:28.132235423 +0000 UTC m=+0.300364573 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 04:36:28 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:36:28 localhost podman[88687]: 2025-10-05 08:36:28.142813663 +0000 UTC m=+0.286402040 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, version=17.1.9, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 04:36:28 localhost podman[88681]: 2025-10-05 08:36:28.168179998 +0000 UTC m=+0.303891769 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, vcs-type=git, release=2, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:36:28 localhost podman[88663]: 2025-10-05 08:36:28.197705317 +0000 UTC m=+0.357708404 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, release=1, build-date=2025-07-21T14:45:33, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute) Oct 5 04:36:28 localhost podman[88663]: 2025-10-05 08:36:28.221014106 +0000 UTC m=+0.381017193 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:36:28 localhost podman[88687]: 2025-10-05 08:36:28.229402596 +0000 UTC m=+0.372990973 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, distribution-scope=public, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:36:28 localhost 
systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:36:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:36:28 localhost podman[88681]: 2025-10-05 08:36:28.280572108 +0000 UTC m=+0.416283949 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack 
TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, release=2, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.9) Oct 5 04:36:28 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:36:28 localhost podman[88674]: 2025-10-05 08:36:28.341475017 +0000 UTC m=+0.489571618 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37) Oct 5 04:36:28 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:36:28 localhost systemd[1]: tmp-crun.pdfK9V.mount: Deactivated successfully. Oct 5 04:36:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:36:37 localhost systemd[1]: tmp-crun.zJ1x1e.mount: Deactivated successfully. 
Oct 5 04:36:37 localhost podman[88970]: 2025-10-05 08:36:37.927011203 +0000 UTC m=+0.090954094 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:36:37 localhost podman[88970]: 2025-10-05 08:36:37.962011422 +0000 UTC m=+0.125954353 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step5, architecture=x86_64) Oct 5 04:36:37 localhost systemd[1]: 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:36:47 localhost podman[89018]: 2025-10-05 08:36:47.911215023 +0000 UTC m=+0.079746676 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:07:59, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Oct 5 04:36:48 localhost podman[89018]: 2025-10-05 08:36:48.106024231 +0000 UTC m=+0.274555934 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, vcs-type=git, build-date=2025-07-21T13:07:59, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:36:48 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:36:58 localhost systemd[1]: tmp-crun.x69eaO.mount: Deactivated successfully. Oct 5 04:36:58 localhost podman[89073]: 2025-10-05 08:36:58.924703398 +0000 UTC m=+0.087266432 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:36:58 localhost podman[89070]: 2025-10-05 08:36:58.957048355 +0000 UTC m=+0.128809281 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:36:58 localhost podman[89080]: 2025-10-05 08:36:58.910034126 +0000 UTC m=+0.072697403 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, 
io.openshift.expose-services=, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:36:58 localhost podman[89093]: 2025-10-05 08:36:58.964803427 +0000 UTC m=+0.119712651 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1) Oct 5 04:36:58 localhost podman[89070]: 2025-10-05 08:36:58.978945245 +0000 UTC m=+0.150706171 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:36:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:36:59 localhost podman[89085]: 2025-10-05 08:36:59.01927899 +0000 UTC m=+0.178726299 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc.) Oct 5 04:36:59 localhost podman[89085]: 2025-10-05 08:36:59.028351489 +0000 UTC m=+0.187798788 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:36:59 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:36:59 localhost podman[89093]: 2025-10-05 08:36:59.063112052 +0000 UTC m=+0.218021266 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:36:59 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:36:59 localhost podman[89071]: 2025-10-05 08:36:59.028970105 +0000 UTC m=+0.196825764 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:36:59 localhost podman[89080]: 2025-10-05 08:36:59.094602634 +0000 UTC 
m=+0.257265901 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, name=rhosp17/openstack-collectd, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., 
build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:36:59 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:36:59 localhost podman[89071]: 2025-10-05 08:36:59.112102374 +0000 UTC m=+0.279958053 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller) Oct 5 04:36:59 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:36:59 localhost podman[89078]: 2025-10-05 08:36:59.069807915 +0000 UTC m=+0.229361637 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, release=1, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:36:59 localhost podman[89073]: 2025-10-05 08:36:59.131273969 +0000 UTC m=+0.293836993 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:36:59 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:36:59 localhost podman[89072]: 2025-10-05 08:36:59.020410541 +0000 UTC m=+0.187238721 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 04:36:59 localhost podman[89072]: 2025-10-05 08:36:59.206146661 +0000 UTC m=+0.372974841 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, 
io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64) Oct 5 04:36:59 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:36:59 localhost podman[89078]: 2025-10-05 08:36:59.405620137 +0000 UTC m=+0.565173929 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, release=1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:36:59 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:37:08 localhost systemd[1]: tmp-crun.5LlMAz.mount: Deactivated successfully. Oct 5 04:37:08 localhost podman[89247]: 2025-10-05 08:37:08.905105786 +0000 UTC m=+0.077575017 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 04:37:08 localhost podman[89247]: 2025-10-05 08:37:08.929999188 +0000 UTC m=+0.102468379 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:37:08 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:37:18 localhost systemd[1]: tmp-crun.ePQcGx.mount: Deactivated successfully. 
Oct 5 04:37:18 localhost podman[89273]: 2025-10-05 08:37:18.907670479 +0000 UTC m=+0.080343212 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 04:37:19 localhost podman[89273]: 2025-10-05 08:37:19.109228602 +0000 UTC m=+0.281901345 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, version=17.1.9, managed_by=tripleo_ansible, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:37:19 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:37:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:37:29 localhost recover_tripleo_nova_virtqemud[89361]: 63458 Oct 5 04:37:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:37:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:37:29 localhost systemd[1]: tmp-crun.51FfVR.mount: Deactivated successfully. Oct 5 04:37:29 localhost podman[89317]: 2025-10-05 08:37:29.954180671 +0000 UTC m=+0.106514471 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1) Oct 5 04:37:29 localhost podman[89304]: 2025-10-05 08:37:29.933283778 +0000 UTC m=+0.098277505 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, version=17.1.9, container_name=ovn_controller, release=1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44) Oct 5 04:37:29 localhost podman[89330]: 2025-10-05 08:37:29.991833362 +0000 UTC m=+0.140522562 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=logrotate_crond) Oct 5 04:37:30 localhost podman[89330]: 2025-10-05 08:37:30.028042035 +0000 UTC m=+0.176731255 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:37:30 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:37:30 localhost podman[89305]: 2025-10-05 08:37:30.043938631 +0000 UTC m=+0.207604601 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:37:30 localhost podman[89304]: 2025-10-05 08:37:30.063858356 +0000 UTC m=+0.228852083 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, architecture=x86_64) Oct 5 04:37:30 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:37:30 localhost podman[89305]: 2025-10-05 08:37:30.098465164 +0000 UTC m=+0.262131104 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, container_name=ceilometer_agent_compute, version=17.1.9, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:37:30 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:37:30 localhost podman[89303]: 2025-10-05 08:37:30.139562671 +0000 UTC m=+0.305925035 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:37:30 localhost podman[89336]: 2025-10-05 08:37:30.199488663 +0000 UTC m=+0.340786460 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1) Oct 5 04:37:30 localhost podman[89303]: 2025-10-05 08:37:30.202493805 +0000 UTC m=+0.368856219 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9) Oct 5 04:37:30 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated 
successfully. Oct 5 04:37:30 localhost podman[89323]: 2025-10-05 08:37:30.248909677 +0000 UTC m=+0.398004558 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:37:30 localhost podman[89323]: 2025-10-05 08:37:30.255672632 +0000 UTC m=+0.404767533 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, release=2, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, version=17.1.9, distribution-scope=public) Oct 5 04:37:30 localhost podman[89336]: 2025-10-05 08:37:30.263075385 +0000 UTC m=+0.404373192 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9) Oct 5 04:37:30 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:37:30 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:37:30 localhost podman[89306]: 2025-10-05 08:37:30.316974512 +0000 UTC m=+0.475097820 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, 
config_id=tripleo_step3, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true) Oct 5 04:37:30 localhost podman[89317]: 2025-10-05 08:37:30.333176496 +0000 UTC m=+0.485510356 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Oct 5 04:37:30 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:37:30 localhost podman[89306]: 2025-10-05 08:37:30.349193976 +0000 UTC m=+0.507317304 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-07-21T13:27:15, tcib_managed=true) Oct 5 04:37:30 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:37:30 localhost systemd[1]: tmp-crun.J9UDmT.mount: Deactivated successfully. Oct 5 04:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:37:39 localhost podman[89563]: 2025-10-05 08:37:39.92698555 +0000 UTC m=+0.090166273 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, config_id=tripleo_step5, distribution-scope=public, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1) Oct 5 04:37:39 localhost podman[89563]: 2025-10-05 08:37:39.984329791 +0000 UTC m=+0.147510514 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-compute-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:37:39 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:37:49 localhost systemd[1]: tmp-crun.VdiKfy.mount: Deactivated successfully. Oct 5 04:37:49 localhost podman[89611]: 2025-10-05 08:37:49.940721419 +0000 UTC m=+0.106057718 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 04:37:50 localhost podman[89611]: 2025-10-05 08:37:50.131120476 +0000 UTC m=+0.296456745 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:37:50 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:38:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:38:00 localhost systemd[1]: tmp-crun.Ou54p5.mount: Deactivated successfully. Oct 5 04:38:00 localhost podman[89665]: 2025-10-05 08:38:00.990026996 +0000 UTC m=+0.134931309 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1) Oct 5 04:38:01 localhost podman[89655]: 2025-10-05 08:38:00.956418805 +0000 UTC m=+0.098200972 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, release=2, config_id=tripleo_step3, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03) Oct 5 04:38:01 localhost podman[89643]: 2025-10-05 08:38:01.037323672 +0000 UTC m=+0.192620839 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, 
Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, distribution-scope=public, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:01 localhost podman[89655]: 2025-10-05 08:38:01.092195326 +0000 UTC m=+0.233977473 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step3, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:38:01 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:38:01 localhost podman[89642]: 2025-10-05 08:38:01.108574545 +0000 UTC m=+0.258636279 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.9) Oct 5 04:38:01 localhost podman[89660]: 2025-10-05 08:38:01.059207912 +0000 UTC m=+0.206635424 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, name=rhosp17/openstack-cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4) Oct 5 04:38:01 localhost podman[89641]: 2025-10-05 08:38:01.089029669 +0000 UTC m=+0.250935818 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, tcib_managed=true, container_name=ovn_controller, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Oct 5 04:38:01 localhost podman[89660]: 2025-10-05 08:38:01.144152879 +0000 UTC m=+0.291580381 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:38:01 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:38:01 localhost podman[89644]: 2025-10-05 08:38:01.155005647 +0000 UTC m=+0.301749870 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target) Oct 5 04:38:01 localhost podman[89640]: 2025-10-05 08:38:01.193136302 +0000 UTC m=+0.355081292 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:38:01 localhost podman[89665]: 2025-10-05 08:38:01.218559468 +0000 UTC m=+0.363463801 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi) Oct 5 04:38:01 localhost podman[89643]: 2025-10-05 08:38:01.228889072 +0000 UTC m=+0.384186279 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git) Oct 5 04:38:01 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated 
successfully. Oct 5 04:38:01 localhost podman[89640]: 2025-10-05 08:38:01.239241345 +0000 UTC m=+0.401186325 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, release=1, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 5 04:38:01 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:38:01 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:38:01 localhost podman[89642]: 2025-10-05 08:38:01.269149305 +0000 UTC m=+0.419211059 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute) Oct 5 04:38:01 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:38:01 localhost podman[89641]: 2025-10-05 08:38:01.320653386 +0000 UTC m=+0.482559485 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, container_name=ovn_controller, release=1, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1) Oct 5 04:38:01 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:38:01 localhost podman[89644]: 2025-10-05 08:38:01.489257687 +0000 UTC m=+0.636001920 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=nova_migration_target, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step4) Oct 5 04:38:01 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:38:10 localhost podman[89819]: 2025-10-05 08:38:10.904014971 +0000 UTC m=+0.076692432 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, version=17.1.9, container_name=nova_compute, release=1, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:38:10 localhost podman[89819]: 2025-10-05 08:38:10.934046085 +0000 UTC m=+0.106723586 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:38:10 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:38:20 localhost systemd[1]: tmp-crun.V8pslB.mount: Deactivated successfully. Oct 5 04:38:20 localhost podman[89845]: 2025-10-05 08:38:20.913484093 +0000 UTC m=+0.082588823 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, version=17.1.9) Oct 5 04:38:21 localhost podman[89845]: 2025-10-05 08:38:21.120209019 +0000 UTC m=+0.289313739 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Oct 5 04:38:21 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:38:31 localhost systemd[1]: tmp-crun.D9eA7d.mount: Deactivated successfully. Oct 5 04:38:31 localhost podman[89901]: 2025-10-05 08:38:31.959155412 +0000 UTC m=+0.099776444 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:38:31 localhost podman[89883]: 2025-10-05 08:38:31.992057804 +0000 UTC m=+0.140544162 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, version=17.1.9, io.openshift.expose-services=, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:32 localhost podman[89889]: 2025-10-05 08:38:32.008301109 +0000 UTC m=+0.161625820 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, release=2, container_name=collectd, version=17.1.9, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) 
Oct 5 04:38:32 localhost podman[89874]: 2025-10-05 08:38:31.931419462 +0000 UTC m=+0.098955682 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:38:32 localhost podman[89889]: 2025-10-05 08:38:32.040125661 +0000 UTC m=+0.193450372 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-collectd, release=2, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 5 04:38:32 localhost podman[89875]: 2025-10-05 08:38:32.045862138 +0000 UTC m=+0.208950026 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44) Oct 5 04:38:32 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:38:32 localhost podman[89874]: 2025-10-05 08:38:32.060900531 +0000 UTC m=+0.228436751 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:38:32 localhost podman[89875]: 2025-10-05 08:38:32.069159547 +0000 UTC m=+0.232247485 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:38:32 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:38:32 localhost podman[89876]: 2025-10-05 08:38:32.105495123 +0000 UTC m=+0.265578639 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, release=1, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33) Oct 5 04:38:32 localhost podman[89901]: 2025-10-05 08:38:32.134086556 +0000 UTC m=+0.274707638 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, version=17.1.9, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:32 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:38:32 localhost podman[89895]: 2025-10-05 08:38:32.152449429 +0000 UTC m=+0.302157841 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:38:32 localhost podman[89895]: 2025-10-05 08:38:32.162363881 +0000 UTC m=+0.312072113 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:32 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:38:32 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:38:32 localhost podman[89876]: 2025-10-05 08:38:32.186322947 +0000 UTC m=+0.346406503 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute) Oct 5 04:38:32 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:38:32 localhost podman[89880]: 2025-10-05 08:38:32.202552172 +0000 UTC m=+0.360649644 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, release=1) Oct 5 04:38:32 localhost podman[89880]: 2025-10-05 08:38:32.215140138 +0000 UTC m=+0.373237640 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, container_name=iscsid) Oct 5 04:38:32 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:38:32 localhost podman[89883]: 2025-10-05 08:38:32.30062022 +0000 UTC m=+0.449106618 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 04:38:32 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 04:38:35 localhost podman[90153]: 2025-10-05 08:38:35.176665253 +0000 UTC m=+0.083784017 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main) Oct 5 04:38:35 localhost podman[90153]: 2025-10-05 08:38:35.299286383 +0000 UTC m=+0.206405147 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat 
Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 04:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:38:41 localhost podman[90295]: 2025-10-05 08:38:41.926622264 +0000 UTC m=+0.081969657 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:38:41 localhost podman[90295]: 2025-10-05 08:38:41.956509923 +0000 UTC m=+0.111857326 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:38:41 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:38:51 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:38:51 localhost recover_tripleo_nova_virtqemud[90350]: 63458 Oct 5 04:38:51 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:38:51 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:38:51 localhost podman[90343]: 2025-10-05 08:38:51.938697109 +0000 UTC m=+0.101961915 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red 
Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:38:52 localhost podman[90343]: 2025-10-05 08:38:52.142090413 +0000 UTC m=+0.305355179 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Oct 5 04:38:52 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:39:02 localhost podman[90396]: 2025-10-05 08:39:02.946710735 +0000 UTC m=+0.094272664 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, io.openshift.expose-services=) Oct 5 04:39:02 localhost systemd[1]: tmp-crun.GKxeus.mount: Deactivated successfully. Oct 5 04:39:02 localhost podman[90389]: 2025-10-05 08:39:02.930251364 +0000 UTC m=+0.081483294 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, tcib_managed=true, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12) Oct 5 04:39:03 localhost podman[90396]: 2025-10-05 08:39:03.033145984 +0000 UTC m=+0.180707833 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:39:03 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:39:03 localhost podman[90376]: 2025-10-05 08:39:03.04285575 +0000 UTC m=+0.205654357 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true) Oct 5 04:39:03 localhost podman[90376]: 2025-10-05 08:39:03.0691212 +0000 UTC 
m=+0.231919867 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:39:03 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:39:03 localhost podman[90375]: 2025-10-05 08:39:03.085181379 +0000 UTC m=+0.249181669 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, 
io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9) Oct 5 04:39:03 localhost podman[90377]: 2025-10-05 08:39:02.983515824 +0000 UTC m=+0.143293548 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1) Oct 5 04:39:03 localhost podman[90377]: 2025-10-05 08:39:03.120188759 +0000 UTC m=+0.279966533 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:39:03 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:39:03 localhost podman[90375]: 2025-10-05 08:39:03.1530592 +0000 UTC m=+0.317059490 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, container_name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 04:39:03 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:39:03 localhost podman[90378]: 2025-10-05 08:39:03.197514188 +0000 UTC m=+0.348275885 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, vcs-type=git) Oct 5 04:39:03 localhost podman[90378]: 2025-10-05 08:39:03.23516778 +0000 UTC m=+0.385929467 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:39:03 localhost podman[90379]: 2025-10-05 08:39:03.244033103 +0000 UTC m=+0.400605379 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
batch=17.1_20250721.1, container_name=nova_migration_target, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 04:39:03 localhost systemd[1]: 
6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:39:03 localhost podman[90389]: 2025-10-05 08:39:03.266966861 +0000 UTC m=+0.418198741 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Oct 5 04:39:03 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:39:03 localhost podman[90380]: 2025-10-05 08:39:03.346695256 +0000 UTC m=+0.502598214 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=2, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, name=rhosp17/openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 04:39:03 localhost podman[90380]: 2025-10-05 08:39:03.380228455 +0000 UTC m=+0.536131453 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:04:03, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2) Oct 5 04:39:03 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:39:03 localhost podman[90379]: 2025-10-05 08:39:03.574055537 +0000 UTC m=+0.730627823 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container) Oct 5 04:39:03 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:39:12 localhost systemd[1]: tmp-crun.LHVBLe.mount: Deactivated successfully. Oct 5 04:39:12 localhost podman[90551]: 2025-10-05 08:39:12.91718218 +0000 UTC m=+0.084337904 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Oct 5 04:39:12 localhost podman[90551]: 2025-10-05 08:39:12.947219553 +0000 UTC m=+0.114375277 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, config_id=tripleo_step5, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:39:12 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:39:22 localhost podman[90578]: 2025-10-05 08:39:22.904614459 +0000 UTC m=+0.073714371 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, config_id=tripleo_step1)
Oct 5 04:39:23 localhost podman[90578]: 2025-10-05 08:39:23.118350096 +0000 UTC m=+0.287450058 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container)
Oct 5 04:39:23 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.
Oct 5 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.
Oct 5 04:39:33 localhost podman[90626]: 2025-10-05 08:39:33.937789205 +0000 UTC m=+0.091876649 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Oct 5 04:39:33 localhost systemd[1]: tmp-crun.rmiJY5.mount: Deactivated successfully.
Oct 5 04:39:34 localhost podman[90607]: 2025-10-05 08:39:34.023332459 +0000 UTC m=+0.194043639 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent)
Oct 5 04:39:34 localhost podman[90608]: 2025-10-05 08:39:33.976746333 +0000 UTC m=+0.144930384 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 5 04:39:34 localhost podman[90612]: 2025-10-05 08:39:34.030009782 +0000 UTC m=+0.194058000 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, release=1, config_id=tripleo_step4, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 5 04:39:34 localhost podman[90618]: 2025-10-05 08:39:33.994030466 +0000 UTC m=+0.147342639 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, release=2, vendor=Red Hat, Inc., container_name=collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container)
Oct 5 04:39:34 localhost podman[90608]: 2025-10-05 08:39:34.059057118 +0000 UTC m=+0.227241179 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 5 04:39:34 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully.
Oct 5 04:39:34 localhost podman[90626]: 2025-10-05 08:39:34.07553068 +0000 UTC m=+0.229618104 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 5 04:39:34 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully.
Oct 5 04:39:34 localhost podman[90623]: 2025-10-05 08:39:34.127912344 +0000 UTC m=+0.284798535 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron)
Oct 5 04:39:34 localhost podman[90618]: 2025-10-05 08:39:34.130754982 +0000 UTC m=+0.284067215 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=2, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 5 04:39:34 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully.
Oct 5 04:39:34 localhost podman[90610]: 2025-10-05 08:39:34.175943851 +0000 UTC m=+0.339311440 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 5 04:39:34 localhost podman[90610]: 2025-10-05 08:39:34.185158334 +0000 UTC m=+0.348525973 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 5 04:39:34 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully.
Oct 5 04:39:34 localhost podman[90609]: 2025-10-05 08:39:34.235729589 +0000 UTC m=+0.398850621 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Oct 5 04:39:34 localhost podman[90623]: 2025-10-05 08:39:34.236436199 +0000 UTC m=+0.393322390 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 5 04:39:34 localhost podman[90607]: 2025-10-05 08:39:34.25689432 +0000 UTC m=+0.427605510 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 5 04:39:34 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully.
Oct 5 04:39:34 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully.
Oct 5 04:39:34 localhost podman[90609]: 2025-10-05 08:39:34.341651782 +0000 UTC m=+0.504772844 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, release=1, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'},
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64) Oct 5 04:39:34 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:39:34 localhost podman[90612]: 2025-10-05 08:39:34.423496445 +0000 UTC m=+0.587544663 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, name=rhosp17/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:39:34 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:39:43 localhost podman[90862]: 2025-10-05 08:39:43.916935906 +0000 UTC m=+0.083292774 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step5, container_name=nova_compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:39:43 localhost podman[90862]: 2025-10-05 08:39:43.948153091 +0000 UTC m=+0.114509949 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, config_id=tripleo_step5, vcs-type=git, maintainer=OpenStack TripleO Team) Oct 5 04:39:43 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:39:53 localhost podman[90910]: 2025-10-05 08:39:53.924179976 +0000 UTC m=+0.094345787 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.33.12, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:39:54 localhost podman[90910]: 2025-10-05 08:39:54.12428956 +0000 UTC m=+0.294455381 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:39:54 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:40:04 localhost systemd[1]: tmp-crun.j3xBE8.mount: Deactivated successfully. 
Oct 5 04:40:04 localhost podman[90975]: 2025-10-05 08:40:04.959486329 +0000 UTC m=+0.094410747 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47) Oct 5 04:40:04 localhost podman[90941]: 2025-10-05 08:40:04.992613367 +0000 UTC m=+0.158816973 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4) Oct 5 04:40:05 localhost podman[90962]: 2025-10-05 08:40:05.005375467 +0000 UTC m=+0.149878778 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:40:05 localhost podman[90943]: 2025-10-05 08:40:04.94199291 +0000 UTC m=+0.100542436 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, 
com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, vendor=Red Hat, Inc., distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, 
vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33) Oct 5 04:40:05 localhost podman[90956]: 2025-10-05 08:40:05.045899787 +0000 UTC m=+0.194615343 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:40:05 localhost podman[90941]: 2025-10-05 08:40:05.058320878 +0000 UTC m=+0.224524504 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_id=tripleo_step4) Oct 5 04:40:05 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:40:05 localhost podman[90942]: 2025-10-05 08:40:05.103230589 +0000 UTC m=+0.266060102 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller) Oct 5 04:40:05 localhost podman[90962]: 2025-10-05 08:40:05.117060898 +0000 UTC 
m=+0.261564269 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, distribution-scope=public, tcib_managed=true) Oct 5 04:40:05 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:40:05 localhost podman[90975]: 2025-10-05 08:40:05.139147772 +0000 UTC m=+0.274072180 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 04:40:05 localhost podman[90942]: 2025-10-05 08:40:05.156237571 +0000 UTC m=+0.319067134 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, container_name=ovn_controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:40:05 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:40:05 localhost podman[90968]: 2025-10-05 08:40:05.16714683 +0000 UTC m=+0.305238516 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:07:52, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack 
TripleO Team, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, version=17.1.9) Oct 5 04:40:05 localhost podman[90943]: 2025-10-05 08:40:05.173012281 +0000 UTC m=+0.331561767 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 5 04:40:05 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:40:05 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:40:05 localhost podman[90949]: 2025-10-05 08:40:05.254885055 +0000 UTC m=+0.408936388 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Oct 5 04:40:05 localhost podman[90968]: 2025-10-05 08:40:05.277166285 +0000 UTC m=+0.415257961 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12) Oct 5 04:40:05 localhost podman[90949]: 2025-10-05 08:40:05.289317258 +0000 UTC m=+0.443368541 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, version=17.1.9, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:40:05 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:40:05 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:40:05 localhost podman[90956]: 2025-10-05 08:40:05.398182951 +0000 UTC m=+0.546898517 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, version=17.1.9) Oct 5 04:40:05 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:40:05 localhost systemd[1]: tmp-crun.Vzmuwy.mount: Deactivated successfully. Oct 5 04:40:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:40:14 localhost podman[91121]: 2025-10-05 08:40:14.902662586 +0000 UTC m=+0.075177671 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute) Oct 5 04:40:14 localhost podman[91121]: 2025-10-05 08:40:14.933647685 +0000 UTC m=+0.106162770 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, container_name=nova_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:40:14 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:40:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:40:24 localhost systemd[1]: tmp-crun.TbvsLM.mount: Deactivated successfully. 
Oct 5 04:40:24 localhost podman[91147]: 2025-10-05 08:40:24.9095929 +0000 UTC m=+0.079975723 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, 
version=17.1.9, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:40:25 localhost podman[91147]: 2025-10-05 08:40:25.094316832 +0000 UTC m=+0.264699645 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:40:25 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:40:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:40:35 localhost systemd[1]: tmp-crun.tMy0G0.mount: Deactivated successfully. Oct 5 04:40:35 localhost podman[91175]: 2025-10-05 08:40:35.932772711 +0000 UTC m=+0.100106864 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:40:35 localhost podman[91176]: 2025-10-05 08:40:35.995393657 +0000 UTC m=+0.151061631 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T13:28:44, container_name=ovn_controller) Oct 5 04:40:36 localhost podman[91175]: 2025-10-05 08:40:36.011852408 +0000 UTC m=+0.179186561 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, release=1, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Oct 5 04:40:36 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:40:36 localhost podman[91176]: 2025-10-05 08:40:36.040345799 +0000 UTC m=+0.196013793 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller) Oct 5 04:40:36 localhost systemd[1]: 
2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:40:36 localhost podman[91186]: 2025-10-05 08:40:36.053257343 +0000 UTC m=+0.205688398 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, container_name=collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, architecture=x86_64, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:40:36 localhost podman[91201]: 2025-10-05 08:40:36.09585656 +0000 UTC m=+0.239893715 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:40:36 localhost podman[91196]: 2025-10-05 08:40:36.110753638 +0000 UTC m=+0.252537321 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1) Oct 5 04:40:36 localhost podman[91186]: 2025-10-05 08:40:36.137423669 +0000 UTC m=+0.289854744 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, release=2) Oct 5 04:40:36 localhost podman[91201]: 2025-10-05 08:40:36.144922204 +0000 UTC m=+0.288959359 container exec_died 
aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public) Oct 5 04:40:36 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:40:36 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:40:36 localhost podman[91184]: 2025-10-05 08:40:35.961115578 +0000 UTC m=+0.103315112 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step4, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, release=1) Oct 5 04:40:36 localhost podman[91177]: 2025-10-05 08:40:35.985278599 +0000 UTC m=+0.143585025 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:40:36 localhost podman[91178]: 2025-10-05 08:40:36.147005562 +0000 UTC m=+0.292867996 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 
17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1, config_id=tripleo_step3, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:40:36 localhost podman[91177]: 2025-10-05 08:40:36.21701425 +0000 UTC m=+0.375320646 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1, version=17.1.9, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:40:36 localhost podman[91178]: 2025-10-05 08:40:36.226397508 +0000 UTC m=+0.372259982 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-iscsid) Oct 5 04:40:36 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:40:36 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:40:36 localhost podman[91196]: 2025-10-05 08:40:36.271019 +0000 UTC m=+0.412802683 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., tcib_managed=true, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, version=17.1.9, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:40:36 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:40:36 localhost podman[91184]: 2025-10-05 08:40:36.303277154 +0000 UTC m=+0.445476748 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:40:36 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:40:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:40:45 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:40:45 localhost recover_tripleo_nova_virtqemud[91426]: 63458 Oct 5 04:40:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:40:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:40:45 localhost podman[91424]: 2025-10-05 08:40:45.90231066 +0000 UTC m=+0.071977784 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64) Oct 5 04:40:45 
localhost podman[91424]: 2025-10-05 08:40:45.929806992 +0000 UTC m=+0.099474086 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, config_id=tripleo_step5, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, distribution-scope=public, tcib_managed=true) Oct 5 04:40:45 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:40:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:40:55 localhost podman[91473]: 2025-10-05 08:40:55.922453664 +0000 UTC m=+0.088495536 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:40:56 localhost podman[91473]: 2025-10-05 08:40:56.109896281 +0000 UTC m=+0.275938143 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, tcib_managed=true, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=) Oct 5 04:40:56 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:41:06 localhost podman[91527]: 2025-10-05 08:41:06.959153667 +0000 UTC m=+0.097808842 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:41:06 localhost podman[91527]: 2025-10-05 08:41:06.974031434 +0000 UTC m=+0.112686609 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:41:06 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:41:06 localhost podman[91503]: 2025-10-05 08:41:06.936840505 +0000 UTC m=+0.099851718 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, 
container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:41:07 localhost podman[91504]: 2025-10-05 08:41:06.950596172 +0000 UTC m=+0.112169355 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:41:07 localhost podman[91511]: 2025-10-05 08:41:07.05012743 +0000 UTC m=+0.197070633 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 5 04:41:07 localhost podman[91506]: 2025-10-05 08:41:07.02862095 +0000 UTC m=+0.184867917 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:41:07 localhost podman[91523]: 2025-10-05 08:41:07.093658182 
+0000 UTC m=+0.240359458 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:41:07 localhost podman[91523]: 2025-10-05 08:41:07.100397248 +0000 UTC m=+0.247098554 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, 
distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:41:07 localhost podman[91506]: 2025-10-05 08:41:07.107210494 +0000 UTC m=+0.263457501 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, container_name=iscsid, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:41:07 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:41:07 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:41:07 localhost podman[91504]: 2025-10-05 08:41:07.130620375 +0000 UTC m=+0.292193618 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, build-date=2025-07-21T14:45:33, 
distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:41:07 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:41:07 localhost podman[91517]: 2025-10-05 08:41:07.141155284 +0000 UTC m=+0.288420595 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-collectd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, distribution-scope=public, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:41:07 localhost podman[91503]: 2025-10-05 08:41:07.167814845 +0000 UTC m=+0.330826068 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, release=1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:41:07 localhost podman[91502]: 2025-10-05 08:41:07.172673137 +0000 UTC m=+0.339715040 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, distribution-scope=public, release=1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:41:07 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:41:07 localhost podman[91517]: 2025-10-05 08:41:07.224986621 +0000 UTC m=+0.372251972 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, 
container_name=collectd, distribution-scope=public, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 04:41:07 localhost podman[91502]: 2025-10-05 08:41:07.233302389 +0000 UTC m=+0.400344362 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true) Oct 5 04:41:07 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:41:07 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:41:07 localhost podman[91511]: 2025-10-05 08:41:07.409131408 +0000 UTC m=+0.556074611 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 04:41:07 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:41:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:41:16 localhost systemd[1]: tmp-crun.DGxqjN.mount: Deactivated successfully. Oct 5 04:41:16 localhost podman[91679]: 2025-10-05 08:41:16.925671773 +0000 UTC m=+0.094118211 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.expose-services=) Oct 5 04:41:16 localhost podman[91679]: 2025-10-05 08:41:16.956277062 +0000 UTC m=+0.124723490 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:41:16 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:41:23 localhost sshd[91705]: main: sshd: ssh-rsa algorithm is disabled Oct 5 04:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:41:26 localhost podman[91706]: 2025-10-05 08:41:26.915359452 +0000 UTC m=+0.081996617 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
build-date=2025-07-21T13:07:59, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:41:27 localhost podman[91706]: 2025-10-05 08:41:27.109156873 +0000 UTC m=+0.275794058 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, version=17.1.9, config_id=tripleo_step1) Oct 5 04:41:27 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:41:37 localhost podman[91737]: 2025-10-05 08:41:37.934216516 +0000 UTC m=+0.092680622 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:41:37 localhost podman[91762]: 2025-10-05 08:41:37.974503691 +0000 UTC m=+0.121437640 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:41:37 localhost podman[91737]: 2025-10-05 08:41:37.980220008 +0000 UTC m=+0.138684073 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:41:37 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:41:38 localhost podman[91762]: 2025-10-05 08:41:38.009025797 +0000 UTC m=+0.155959686 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat 
OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, distribution-scope=public, architecture=x86_64, tcib_managed=true, container_name=logrotate_crond) Oct 5 04:41:38 localhost podman[91735]: 2025-10-05 08:41:38.029892578 +0000 UTC m=+0.192669670 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, release=1, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:41:38 localhost podman[91738]: 2025-10-05 08:41:37.986983413 +0000 UTC m=+0.143686419 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, config_id=tripleo_step3, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container) Oct 5 04:41:38 localhost podman[91735]: 2025-10-05 08:41:38.051907432 +0000 UTC m=+0.214684524 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:41:38 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:41:38 localhost podman[91738]: 2025-10-05 08:41:38.068046314 +0000 UTC m=+0.224749340 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3) Oct 5 04:41:38 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:41:38 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:41:38 localhost podman[91767]: 2025-10-05 08:41:38.147610374 +0000 UTC m=+0.291174020 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:41:38 localhost podman[91750]: 2025-10-05 08:41:38.105096759 +0000 UTC m=+0.254931587 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, container_name=collectd, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:41:38 localhost podman[91736]: 2025-10-05 08:41:38.1197378 +0000 UTC 
m=+0.280471227 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:41:38 localhost podman[91767]: 2025-10-05 08:41:38.174983384 +0000 UTC m=+0.318547040 container exec_died 
aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, distribution-scope=public) Oct 5 04:41:38 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:41:38 localhost podman[91750]: 2025-10-05 08:41:38.188200837 +0000 UTC m=+0.338035675 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, release=2) Oct 5 04:41:38 localhost podman[91736]: 2025-10-05 08:41:38.201147451 +0000 UTC m=+0.361880888 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:28:44, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller) Oct 5 04:41:38 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:41:38 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:41:38 localhost podman[91744]: 2025-10-05 08:41:38.247410919 +0000 UTC m=+0.399392716 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., 
release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vcs-type=git, io.openshift.expose-services=, container_name=nova_migration_target) Oct 5 04:41:38 localhost podman[91744]: 2025-10-05 08:41:38.645314243 +0000 UTC m=+0.797296020 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37) Oct 5 04:41:38 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:41:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:41:47 localhost podman[91993]: 2025-10-05 08:41:47.912358031 +0000 UTC m=+0.080362304 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute) Oct 5 04:41:47 localhost podman[91993]: 2025-10-05 08:41:47.944127151 +0000 UTC m=+0.112131414 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Oct 5 04:41:47 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:41:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:41:57 localhost systemd[1]: tmp-crun.xnXvbt.mount: Deactivated successfully. 
Oct 5 04:41:57 localhost podman[92019]: 2025-10-05 08:41:57.922418839 +0000 UTC m=+0.085721870 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:41:58 localhost podman[92019]: 2025-10-05 08:41:58.113581337 +0000 UTC m=+0.276884368 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, release=1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd) Oct 5 04:41:58 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:42:08 localhost podman[92067]: 2025-10-05 08:42:08.961021874 +0000 UTC m=+0.102116970 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, version=17.1.9, architecture=x86_64, container_name=collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, release=2, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:42:08 localhost podman[92067]: 2025-10-05 08:42:08.997499103 +0000 UTC m=+0.138594189 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 
'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:42:09 localhost systemd[1]: tmp-crun.yYJOHU.mount: Deactivated successfully. 
Oct 5 04:42:09 localhost podman[92053]: 2025-10-05 08:42:09.015318871 +0000 UTC m=+0.169231948 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, release=1, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:42:09 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:42:09 localhost podman[92051]: 2025-10-05 08:42:09.054568127 +0000 UTC m=+0.213084650 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:42:09 localhost podman[92051]: 2025-10-05 08:42:09.084123717 +0000 UTC m=+0.242640240 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Oct 5 04:42:09 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:42:09 localhost podman[92079]: 2025-10-05 08:42:09.172538119 +0000 UTC m=+0.309283185 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Oct 5 04:42:09 localhost podman[92059]: 2025-10-05 08:42:09.176514229 +0000 UTC m=+0.323537357 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:42:09 localhost podman[92079]: 2025-10-05 08:42:09.204862366 +0000 UTC m=+0.341607402 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true) Oct 5 04:42:09 localhost podman[92052]: 2025-10-05 08:42:09.206965303 +0000 UTC m=+0.364152150 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33) Oct 5 04:42:09 localhost podman[92075]: 
2025-10-05 08:42:09.224945046 +0000 UTC m=+0.357722024 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, vcs-type=git, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.9) Oct 5 04:42:09 localhost podman[92052]: 2025-10-05 08:42:09.242439245 +0000 UTC m=+0.399626112 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute) Oct 5 04:42:09 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:42:09 localhost podman[92075]: 2025-10-05 08:42:09.26268358 +0000 UTC m=+0.395460568 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond) Oct 5 04:42:09 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:42:09 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:42:09 localhost podman[92050]: 2025-10-05 08:42:09.306012668 +0000 UTC m=+0.467823151 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Oct 5 04:42:09 localhost podman[92053]: 2025-10-05 08:42:09.33238986 +0000 UTC m=+0.486302857 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=iscsid, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1, vendor=Red Hat, Inc.) Oct 5 04:42:09 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:42:09 localhost podman[92050]: 2025-10-05 08:42:09.379150852 +0000 UTC m=+0.540961335 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12) Oct 5 04:42:09 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:42:09 localhost podman[92059]: 2025-10-05 08:42:09.546441626 +0000 UTC m=+0.693464824 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:42:09 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:42:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:42:18 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:42:18 localhost recover_tripleo_nova_virtqemud[92234]: 63458 Oct 5 04:42:18 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:42:18 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:42:18 localhost systemd[1]: tmp-crun.GAzdfu.mount: Deactivated successfully. 
Oct 5 04:42:18 localhost podman[92232]: 2025-10-05 08:42:18.936802183 +0000 UTC m=+0.103580350 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Oct 5 04:42:18 localhost podman[92232]: 2025-10-05 08:42:18.967302089 +0000 UTC m=+0.134080206 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:42:18 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:42:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:42:28 localhost podman[92261]: 2025-10-05 08:42:28.90987169 +0000 UTC m=+0.078849918 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64) Oct 5 04:42:29 localhost podman[92261]: 2025-10-05 08:42:29.101330892 +0000 UTC m=+0.270309160 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 5 04:42:29 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:42:39 localhost systemd[1]: tmp-crun.xVJ0w1.mount: Deactivated successfully. Oct 5 04:42:39 localhost podman[92317]: 2025-10-05 08:42:39.971617565 +0000 UTC m=+0.109066099 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:42:39 localhost podman[92289]: 2025-10-05 08:42:39.923292807 +0000 UTC m=+0.086873359 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, io.openshift.expose-services=) Oct 5 04:42:40 localhost podman[92290]: 2025-10-05 08:42:39.950437493 +0000 UTC m=+0.111988639 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 04:42:40 localhost podman[92289]: 2025-10-05 08:42:40.004221341 +0000 UTC m=+0.167801913 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, release=1, build-date=2025-07-21T16:28:53, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team) Oct 5 04:42:40 localhost podman[92291]: 2025-10-05 08:42:40.009938598 +0000 UTC m=+0.155690640 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible) Oct 5 04:42:40 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:42:40 localhost podman[92317]: 2025-10-05 08:42:40.043869971 +0000 UTC m=+0.181318545 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:42:40 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:42:40 localhost podman[92299]: 2025-10-05 08:42:40.052806736 +0000 UTC m=+0.201202240 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:42:40 localhost podman[92290]: 2025-10-05 08:42:40.084017185 +0000 UTC m=+0.245568341 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:42:40 localhost podman[92307]: 2025-10-05 08:42:40.089915377 +0000 UTC m=+0.231321859 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public) Oct 5 04:42:40 localhost podman[92307]: 2025-10-05 08:42:40.096755184 +0000 UTC m=+0.238161656 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron) Oct 5 04:42:40 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:42:40 localhost podman[92292]: 2025-10-05 08:42:40.105922336 +0000 UTC m=+0.258092374 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, container_name=iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1) Oct 5 04:42:40 localhost podman[92291]: 2025-10-05 08:42:40.111978632 +0000 UTC m=+0.257730674 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, version=17.1.9, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, 
build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 04:42:40 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:42:40 localhost podman[92292]: 2025-10-05 08:42:40.145977078 +0000 UTC m=+0.298147116 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1, managed_by=tripleo_ansible, 
io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:42:40 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:42:40 localhost podman[92305]: 2025-10-05 08:42:40.163426597 +0000 UTC m=+0.308321365 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, version=17.1.9) Oct 5 04:42:40 localhost podman[92305]: 2025-10-05 08:42:40.171867189 +0000 UTC m=+0.316761937 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true) Oct 5 04:42:40 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:42:40 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. 
Oct 5 04:42:40 localhost podman[92299]: 2025-10-05 08:42:40.419171896 +0000 UTC m=+0.567567470 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:42:40 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:42:40 localhost systemd[1]: tmp-crun.vLbFI3.mount: Deactivated successfully. Oct 5 04:42:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:42:49 localhost systemd[1]: tmp-crun.JylLzX.mount: Deactivated successfully. Oct 5 04:42:49 localhost podman[92542]: 2025-10-05 08:42:49.928517143 +0000 UTC m=+0.092532374 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:42:49 localhost podman[92542]: 2025-10-05 08:42:49.989437907 +0000 UTC m=+0.153453128 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, container_name=nova_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:42:50 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:42:59 localhost podman[92569]: 2025-10-05 08:42:59.914693555 +0000 UTC m=+0.085584313 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:43:00 localhost podman[92569]: 2025-10-05 08:43:00.11353863 +0000 UTC m=+0.284429368 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:43:00 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:43:10 localhost systemd[1]: tmp-crun.h9K9FX.mount: Deactivated successfully. Oct 5 04:43:10 localhost podman[92619]: 2025-10-05 08:43:10.95719742 +0000 UTC m=+0.101959313 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 
cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, distribution-scope=public, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:43:11 localhost podman[92600]: 2025-10-05 08:43:11.007020449 +0000 UTC m=+0.165191931 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-type=git, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:43:11 localhost podman[92598]: 2025-10-05 08:43:11.017476117 +0000 UTC m=+0.181663814 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:28:53, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=) Oct 5 04:43:11 localhost podman[92600]: 2025-10-05 08:43:11.043281326 +0000 UTC m=+0.201452778 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute) Oct 5 04:43:11 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:43:11 localhost podman[92601]: 2025-10-05 08:43:11.057844936 +0000 UTC m=+0.203869204 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:43:11 localhost podman[92598]: 2025-10-05 08:43:11.061069495 +0000 UTC m=+0.225257202 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:43:11 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:43:11 localhost podman[92619]: 2025-10-05 08:43:11.096759826 +0000 UTC m=+0.241521729 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.9, batch=17.1_20250721.1, 
tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:43:11 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:43:11 localhost podman[92599]: 2025-10-05 08:43:11.107130631 +0000 UTC m=+0.267605836 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:43:11 localhost podman[92620]: 2025-10-05 08:43:11.114322808 +0000 UTC m=+0.253274151 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:43:11 localhost podman[92601]: 2025-10-05 08:43:11.147573212 +0000 UTC m=+0.293597470 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:43:11 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:43:11 localhost podman[92614]: 2025-10-05 08:43:11.159871551 +0000 UTC m=+0.307692278 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3) Oct 5 04:43:11 localhost podman[92620]: 2025-10-05 08:43:11.169074704 +0000 UTC m=+0.308026047 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public) Oct 5 04:43:11 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:43:11 localhost podman[92599]: 2025-10-05 08:43:11.184825936 +0000 UTC m=+0.345301181 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, distribution-scope=public) Oct 5 04:43:11 localhost systemd[1]: 
2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:43:11 localhost podman[92607]: 2025-10-05 08:43:11.201448063 +0000 UTC m=+0.350772982 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, name=rhosp17/openstack-nova-compute, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:43:11 localhost podman[92614]: 2025-10-05 08:43:11.220282191 +0000 UTC m=+0.368102948 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:43:11 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:43:11 localhost podman[92607]: 2025-10-05 08:43:11.587195505 +0000 UTC m=+0.736520464 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:43:11 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:43:11 localhost systemd[1]: tmp-crun.RoPSSc.mount: Deactivated successfully. Oct 5 04:43:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:43:20 localhost systemd[1]: tmp-crun.531ouP.mount: Deactivated successfully. Oct 5 04:43:20 localhost podman[92779]: 2025-10-05 08:43:20.931787745 +0000 UTC m=+0.098683193 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-type=git) Oct 5 04:43:20 localhost podman[92779]: 2025-10-05 08:43:20.982900639 +0000 UTC m=+0.149796067 container exec_died 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, release=1, vcs-type=git, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9) Oct 5 04:43:20 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:43:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:43:30 localhost recover_tripleo_nova_virtqemud[92809]: 63458 Oct 5 04:43:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:43:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:43:30 localhost podman[92806]: 2025-10-05 08:43:30.911218863 +0000 UTC m=+0.080192356 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:43:31 localhost podman[92806]: 2025-10-05 08:43:31.079041775 +0000 UTC m=+0.248015248 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, distribution-scope=public) Oct 5 04:43:31 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:43:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:43:41 localhost systemd[1]: tmp-crun.PxWSde.mount: Deactivated successfully. Oct 5 04:43:41 localhost podman[92870]: 2025-10-05 08:43:41.979824586 +0000 UTC m=+0.114437236 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:43:41 localhost podman[92840]: 2025-10-05 08:43:41.935658792 +0000 UTC m=+0.087456965 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red 
Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 5 04:43:41 localhost podman[92841]: 2025-10-05 08:43:41.990757007 +0000 UTC m=+0.142410516 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, distribution-scope=public) Oct 5 04:43:42 localhost podman[92840]: 2025-10-05 08:43:42.020057942 +0000 UTC m=+0.171856155 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:45:33, release=1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:43:42 localhost podman[92870]: 2025-10-05 08:43:42.028534615 +0000 UTC m=+0.163147265 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:43:42 localhost systemd[1]: 
528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:43:42 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:43:42 localhost podman[92852]: 2025-10-05 08:43:42.06985393 +0000 UTC m=+0.214345991 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:43:42 localhost podman[92841]: 2025-10-05 08:43:42.079369392 +0000 UTC m=+0.231022951 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1) Oct 5 04:43:42 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:43:42 localhost podman[92838]: 2025-10-05 08:43:42.149972903 +0000 UTC m=+0.308248043 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack 
osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:43:42 localhost podman[92839]: 2025-10-05 08:43:42.208846451 +0000 UTC m=+0.367031349 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller) Oct 5 04:43:42 localhost podman[92839]: 2025-10-05 08:43:42.232232023 +0000 UTC m=+0.390416971 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, version=17.1.9, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 5 04:43:42 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:43:42 localhost podman[92838]: 2025-10-05 08:43:42.261053776 +0000 UTC m=+0.419328916 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, distribution-scope=public) Oct 5 04:43:42 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:43:42 localhost podman[92865]: 2025-10-05 08:43:42.310217357 +0000 UTC m=+0.447112099 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, distribution-scope=public, tcib_managed=true, release=1, vcs-type=git, 
build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:43:42 localhost podman[92865]: 2025-10-05 08:43:42.344263502 +0000 UTC m=+0.481158244 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond) Oct 5 04:43:42 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:43:42 localhost podman[92853]: 2025-10-05 08:43:42.419566462 +0000 UTC m=+0.561607686 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, release=2, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 04:43:42 localhost podman[92853]: 2025-10-05 08:43:42.428871048 +0000 UTC m=+0.570912242 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, container_name=collectd, release=2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:43:42 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:43:42 localhost podman[92852]: 2025-10-05 08:43:42.464045735 +0000 UTC m=+0.608537846 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, release=1, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:43:42 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:43:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:43:51 localhost podman[93091]: 2025-10-05 08:43:51.919897841 +0000 UTC m=+0.082980251 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 5 04:43:51 localhost podman[93091]: 2025-10-05 08:43:51.947749897 +0000 UTC m=+0.110832267 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, release=1, name=rhosp17/openstack-nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, 
distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:43:51 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:44:01 localhost podman[93117]: 2025-10-05 08:44:01.913022625 +0000 UTC m=+0.077942263 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible) Oct 5 04:44:02 localhost podman[93117]: 2025-10-05 08:44:02.085256938 +0000 UTC m=+0.250176576 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:44:02 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:44:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:44:12 localhost systemd[1]: tmp-crun.njO4Ai.mount: Deactivated successfully. 
Oct 5 04:44:12 localhost podman[93145]: 2025-10-05 08:44:12.941005779 +0000 UTC m=+0.105028017 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12) Oct 5 04:44:12 localhost podman[93147]: 2025-10-05 08:44:12.984635479 +0000 UTC m=+0.140151523 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, 
release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc.) 
Oct 5 04:44:12 localhost podman[93146]: 2025-10-05 08:44:12.99011335 +0000 UTC m=+0.154446916 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9) Oct 5 04:44:13 localhost podman[93145]: 2025-10-05 08:44:13.001082091 +0000 UTC 
m=+0.165104319 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:28:53, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:44:13 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:44:13 localhost podman[93146]: 2025-10-05 08:44:13.010782677 +0000 UTC m=+0.175116223 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.) Oct 5 04:44:13 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Deactivated successfully. Oct 5 04:44:13 localhost podman[93155]: 2025-10-05 08:44:12.923830987 +0000 UTC m=+0.078812567 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.9, build-date=2025-07-21T13:07:52, distribution-scope=public, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64) Oct 5 04:44:13 localhost podman[93147]: 2025-10-05 08:44:13.036013871 +0000 UTC m=+0.191529915 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 5 04:44:13 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:44:13 localhost podman[93155]: 2025-10-05 08:44:13.051951269 +0000 UTC m=+0.206932859 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, version=17.1.9, container_name=logrotate_crond, release=1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Oct 5 04:44:13 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:44:13 localhost podman[93150]: 2025-10-05 08:44:13.095213418 +0000 UTC m=+0.250017283 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:44:13 localhost podman[93149]: 2025-10-05 08:44:13.131350121 +0000 UTC m=+0.290506635 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, version=17.1.9, container_name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:44:13 localhost podman[93148]: 2025-10-05 08:44:13.146064596 +0000 UTC m=+0.304593763 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:44:13 localhost podman[93148]: 2025-10-05 08:44:13.18405321 +0000 UTC m=+0.342582297 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:44:13 localhost systemd[1]: 
6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:44:13 localhost podman[93162]: 2025-10-05 08:44:13.194218729 +0000 UTC m=+0.349081915 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:44:13 localhost podman[93150]: 2025-10-05 08:44:13.215469153 +0000 UTC m=+0.370273008 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, tcib_managed=true, vcs-type=git, container_name=collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, release=2, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, version=17.1.9) Oct 5 04:44:13 localhost podman[93162]: 2025-10-05 08:44:13.218242009 +0000 UTC m=+0.373105225 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1, tcib_managed=true, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:44:13 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:44:13 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:44:13 localhost podman[93149]: 2025-10-05 08:44:13.460058446 +0000 UTC m=+0.619214950 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37) Oct 5 04:44:13 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:44:22 localhost podman[93319]: 2025-10-05 08:44:22.917753304 +0000 UTC m=+0.085104471 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:44:22 localhost podman[93319]: 2025-10-05 08:44:22.970884424 +0000 UTC m=+0.138235601 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, batch=17.1_20250721.1, container_name=nova_compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.9) Oct 5 04:44:22 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:44:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:44:32 localhost podman[93346]: 2025-10-05 08:44:32.922812594 +0000 UTC m=+0.081642515 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, distribution-scope=public, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:44:33 localhost podman[93346]: 2025-10-05 08:44:33.134634666 +0000 UTC m=+0.293464537 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:44:33 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:44:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:44:43 localhost podman[93377]: 2025-10-05 08:44:43.937784192 +0000 UTC m=+0.090542239 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) Oct 5 04:44:43 localhost systemd[1]: tmp-crun.rlM7y2.mount: Deactivated successfully. 
Oct 5 04:44:43 localhost podman[93395]: 2025-10-05 08:44:43.960017753 +0000 UTC m=+0.103855835 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, distribution-scope=public, container_name=collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack 
osp-17.1, release=2, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Oct 5 04:44:43 localhost podman[93375]: 2025-10-05 08:44:43.994982965 +0000 UTC m=+0.154863568 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Oct 5 04:44:44 localhost podman[93374]: 2025-10-05 08:44:44.035983511 +0000 UTC m=+0.199155735 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, version=17.1.9) Oct 5 04:44:44 localhost podman[93375]: 2025-10-05 08:44:44.037987316 +0000 UTC m=+0.197867909 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible) Oct 5 04:44:44 localhost podman[93375]: unhealthy Oct 5 04:44:44 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:44:44 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:44:44 localhost podman[93376]: 2025-10-05 08:44:44.053896104 +0000 UTC m=+0.214005533 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-07-21T14:45:33, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:44:44 localhost podman[93377]: 2025-10-05 08:44:44.078235192 +0000 UTC m=+0.230993249 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:44:44 localhost podman[93411]: 2025-10-05 08:44:44.089485131 +0000 UTC m=+0.228630064 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:44:44 localhost podman[93395]: 2025-10-05 08:44:44.096995388 +0000 UTC m=+0.240833410 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, release=2, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, container_name=collectd, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, name=rhosp17/openstack-collectd) Oct 5 04:44:44 localhost podman[93376]: 2025-10-05 08:44:44.104077302 +0000 UTC m=+0.264186751 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, version=17.1.9, 
com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Oct 5 04:44:44 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:44:44 localhost podman[93403]: 2025-10-05 08:44:44.105917083 +0000 UTC m=+0.242584108 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, container_name=logrotate_crond, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:44:44 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:44:44 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:44:44 localhost podman[93411]: 2025-10-05 08:44:44.142104058 +0000 UTC m=+0.281248991 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public) Oct 5 04:44:44 localhost podman[93374]: 2025-10-05 08:44:44.150012595 +0000 UTC m=+0.313184879 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, 
tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, version=17.1.9, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:44:44 localhost systemd[1]: 
aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:44:44 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. Oct 5 04:44:44 localhost podman[93383]: 2025-10-05 08:44:44.211981448 +0000 UTC m=+0.359397708 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, 
architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:44:44 localhost podman[93403]: 2025-10-05 08:44:44.238299011 +0000 UTC m=+0.374966046 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=logrotate_crond, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1) Oct 5 04:44:44 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:44:44 localhost podman[93383]: 2025-10-05 08:44:44.579840818 +0000 UTC m=+0.727257048 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, architecture=x86_64, release=1, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:44:44 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:44:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:44:53 localhost systemd[1]: tmp-crun.cz0Qjv.mount: Deactivated successfully. 
Oct 5 04:44:53 localhost podman[93664]: 2025-10-05 08:44:53.923607267 +0000 UTC m=+0.088778522 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, config_id=tripleo_step5, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible) Oct 5 04:44:53 localhost podman[93664]: 2025-10-05 08:44:53.960616484 +0000 UTC m=+0.125787749 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:44:53 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:45:03 localhost podman[93705]: 2025-10-05 08:45:03.891003553 +0000 UTC m=+0.059121086 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:45:04 localhost podman[93705]: 2025-10-05 08:45:04.101332654 +0000 UTC m=+0.269450127 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red 
Hat, Inc., vcs-type=git, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:45:04 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5645 writes, 24K keys, 5645 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5645 writes, 715 syncs, 7.90 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8 writes, 19 keys, 8 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 8 writes, 4 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:45:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:45:11 localhost recover_tripleo_nova_virtqemud[93735]: 63458 Oct 5 04:45:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:45:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:45:14 localhost systemd[1]: tmp-crun.enf4Sw.mount: Deactivated successfully. 
Oct 5 04:45:14 localhost podman[93739]: 2025-10-05 08:45:14.945680582 +0000 UTC m=+0.103052013 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:45:14 localhost systemd[1]: tmp-crun.P2HLrQ.mount: Deactivated successfully. Oct 5 04:45:14 localhost podman[93739]: 2025-10-05 08:45:14.954911796 +0000 UTC m=+0.112283247 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:45:14 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:45:14 localhost podman[93736]: 2025-10-05 08:45:14.996099328 +0000 UTC m=+0.156686117 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible) Oct 5 04:45:15 localhost podman[93738]: 2025-10-05 08:45:15.007983814 +0000 UTC m=+0.155724040 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T14:45:33, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:45:15 localhost podman[93761]: 2025-10-05 08:45:14.9579902 +0000 UTC m=+0.096213465 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Oct 5 04:45:15 localhost podman[93761]: 2025-10-05 08:45:15.041345971 +0000 UTC m=+0.179569246 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:45:15 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:45:15 localhost podman[93737]: 2025-10-05 08:45:15.049022363 +0000 UTC m=+0.199334630 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Oct 5 04:45:15 localhost podman[93738]: 2025-10-05 08:45:15.059063788 +0000 UTC m=+0.206803964 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, distribution-scope=public, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:45:15 localhost podman[93736]: 2025-10-05 08:45:15.073198737 +0000 UTC m=+0.233785536 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git) Oct 5 04:45:15 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:45:15 localhost podman[93737]: 2025-10-05 08:45:15.097189996 +0000 UTC m=+0.247502223 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, tcib_managed=true, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller) Oct 5 04:45:15 localhost podman[93737]: unhealthy Oct 5 04:45:15 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:45:15 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:45:15 localhost podman[93745]: 2025-10-05 08:45:15.110616146 +0000 UTC m=+0.250204699 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:45:15 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Deactivated successfully. 
Oct 5 04:45:15 localhost podman[93751]: 2025-10-05 08:45:15.111784217 +0000 UTC m=+0.257728624 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
name=rhosp17/openstack-collectd, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:45:15 localhost podman[93760]: 2025-10-05 08:45:15.209199025 +0000 UTC m=+0.351021158 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 
'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, io.buildah.version=1.33.12) Oct 5 04:45:15 localhost podman[93751]: 2025-10-05 08:45:15.241464971 +0000 UTC m=+0.387409428 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 5 04:45:15 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:45:15 localhost podman[93760]: 2025-10-05 08:45:15.296038162 +0000 UTC m=+0.437860325 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, container_name=logrotate_crond, vcs-type=git) Oct 5 04:45:15 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:45:15 localhost podman[93745]: 2025-10-05 08:45:15.512235913 +0000 UTC m=+0.651824426 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12) Oct 5 04:45:15 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:45:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:45:24 localhost podman[93912]: 2025-10-05 08:45:24.91639738 +0000 UTC m=+0.083927758 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., release=1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, version=17.1.9) Oct 5 04:45:24 localhost podman[93912]: 2025-10-05 08:45:24.94514685 +0000 UTC m=+0.112677248 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, build-date=2025-07-21T14:48:37) Oct 5 04:45:24 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:45:34 localhost podman[93938]: 2025-10-05 08:45:34.919439847 +0000 UTC m=+0.083928488 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:45:35 localhost podman[93938]: 2025-10-05 08:45:35.089867011 +0000 UTC m=+0.254355672 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public) Oct 5 04:45:35 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:45:45 localhost systemd[1]: tmp-crun.wsgtqp.mount: Deactivated successfully. Oct 5 04:45:45 localhost podman[93975]: 2025-10-05 08:45:45.941353575 +0000 UTC m=+0.095053884 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:45:45 localhost systemd[1]: tmp-crun.m1YVX9.mount: Deactivated successfully. Oct 5 04:45:45 localhost podman[93969]: 2025-10-05 08:45:45.978821165 +0000 UTC m=+0.135080864 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.9, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:45:46 localhost podman[93969]: 2025-10-05 08:45:46.000215473 +0000 UTC m=+0.156475152 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.33.12) Oct 5 04:45:46 localhost podman[94000]: 2025-10-05 08:45:45.954361523 +0000 UTC m=+0.090631302 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, 
health_status=healthy, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, 
build-date=2025-07-21T15:29:47, version=17.1.9) Oct 5 04:45:46 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:45:46 localhost podman[93993]: 2025-10-05 08:45:46.012017528 +0000 UTC m=+0.146832388 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, architecture=x86_64, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, 
name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container) Oct 5 04:45:46 localhost podman[94000]: 2025-10-05 08:45:46.03394784 +0000 UTC m=+0.170217659 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:45:46 localhost podman[93993]: 2025-10-05 08:45:46.041722444 +0000 UTC m=+0.176537304 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-cron, version=17.1.9, architecture=x86_64, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 5 04:45:46 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:45:46 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:45:46 localhost podman[93975]: 2025-10-05 08:45:46.0750575 +0000 UTC m=+0.228757859 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 
iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, release=1) Oct 5 04:45:46 localhost podman[93968]: 2025-10-05 08:45:46.082285569 +0000 UTC m=+0.242389534 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, container_name=ovn_controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, 
com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, release=1, io.openshift.expose-services=) Oct 5 04:45:46 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:45:46 localhost podman[93987]: 2025-10-05 08:45:46.042818314 +0000 UTC m=+0.188598055 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-collectd, release=2, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:45:46 localhost podman[93981]: 2025-10-05 08:45:46.120461007 +0000 UTC m=+0.261552609 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, vcs-type=git, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:45:46 localhost podman[93968]: 2025-10-05 08:45:46.123228423 +0000 UTC m=+0.283332388 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1) Oct 5 04:45:46 localhost podman[93968]: unhealthy Oct 5 04:45:46 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:45:46 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:45:46 localhost podman[93987]: 2025-10-05 08:45:46.170910684 +0000 UTC m=+0.316690455 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, name=rhosp17/openstack-collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., distribution-scope=public, release=2, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 04:45:46 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:45:46 localhost podman[93967]: 2025-10-05 08:45:45.992284214 +0000 UTC m=+0.155874374 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, 
build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible) Oct 5 04:45:46 localhost podman[93967]: 2025-10-05 08:45:46.22717759 +0000 UTC m=+0.390767800 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red 
Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53) Oct 5 04:45:46 localhost podman[93967]: unhealthy Oct 5 04:45:46 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:45:46 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:45:46 localhost podman[93981]: 2025-10-05 08:45:46.493077218 +0000 UTC m=+0.634168810 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:45:46 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:45:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:45:55 localhost podman[94213]: 2025-10-05 08:45:55.394270281 +0000 UTC m=+0.092713609 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 04:45:55 localhost podman[94213]: 2025-10-05 08:45:55.453326534 +0000 UTC m=+0.151769822 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.expose-services=, version=17.1.9, 
tcib_managed=true, release=1) Oct 5 04:45:55 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:45:55 localhost podman[94293]: Oct 5 04:45:55 localhost podman[94293]: 2025-10-05 08:45:55.997638844 +0000 UTC m=+0.072601147 container create 3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_goldwasser, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=) Oct 5 04:45:56 localhost systemd[1]: Started libpod-conmon-3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115.scope. Oct 5 04:45:56 localhost systemd[1]: Started libcrun container. 
Oct 5 04:45:56 localhost podman[94293]: 2025-10-05 08:45:55.966752955 +0000 UTC m=+0.041715258 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 04:45:56 localhost podman[94293]: 2025-10-05 08:45:56.078799104 +0000 UTC m=+0.153761437 container init 3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_goldwasser, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, name=rhceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 04:45:56 localhost podman[94293]: 2025-10-05 08:45:56.093912719 +0000 UTC m=+0.168875022 container start 3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_goldwasser, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_CLEAN=True, 
com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:45:56 localhost podman[94293]: 2025-10-05 08:45:56.094237889 +0000 UTC m=+0.169200192 container attach 3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_goldwasser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 
5 04:45:56 localhost ecstatic_goldwasser[94308]: 167 167 Oct 5 04:45:56 localhost systemd[1]: libpod-3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115.scope: Deactivated successfully. Oct 5 04:45:56 localhost podman[94293]: 2025-10-05 08:45:56.097710004 +0000 UTC m=+0.172672357 container died 3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_goldwasser, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main) Oct 5 04:45:56 localhost podman[94313]: 2025-10-05 08:45:56.189494497 +0000 UTC m=+0.081296485 container remove 3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_goldwasser, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, architecture=x86_64) Oct 5 04:45:56 localhost systemd[1]: libpod-conmon-3d82420ac0a11e56ae9d4953852af40acfd84a210ae06b4ca5088b9ef94d8115.scope: Deactivated successfully. Oct 5 04:45:56 localhost podman[94335]: Oct 5 04:45:56 localhost podman[94335]: 2025-10-05 08:45:56.381530325 +0000 UTC m=+0.069624215 container create 524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_franklin, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, architecture=x86_64, name=rhceph) Oct 5 04:45:56 localhost systemd[1]: tmp-crun.zNusw4.mount: Deactivated successfully. Oct 5 04:45:56 localhost systemd[1]: var-lib-containers-storage-overlay-7f491f10e8c323485c1524aef16eb27b1c1ff15131acc25adee68f0aaef8fd6b-merged.mount: Deactivated successfully. Oct 5 04:45:56 localhost systemd[1]: Started libpod-conmon-524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531.scope. Oct 5 04:45:56 localhost systemd[1]: Started libcrun container. Oct 5 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ee92b55fa3397bed4e4940280983e3e8bd5b2850b3daa25e61751f7c8f441/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ee92b55fa3397bed4e4940280983e3e8bd5b2850b3daa25e61751f7c8f441/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5ee92b55fa3397bed4e4940280983e3e8bd5b2850b3daa25e61751f7c8f441/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 04:45:56 localhost podman[94335]: 2025-10-05 08:45:56.449624166 +0000 UTC m=+0.137718066 container init 524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_franklin, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhceph ceph) Oct 5 04:45:56 localhost podman[94335]: 2025-10-05 08:45:56.459362664 +0000 UTC m=+0.147456534 container start 524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_franklin, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 04:45:56 localhost podman[94335]: 2025-10-05 08:45:56.459499808 
+0000 UTC m=+0.147593728 container attach 524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_franklin, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, name=rhceph) Oct 5 04:45:56 localhost podman[94335]: 2025-10-05 08:45:56.361223437 +0000 UTC m=+0.049317357 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 04:45:57 localhost cranky_franklin[94351]: [ Oct 5 04:45:57 localhost cranky_franklin[94351]: { Oct 5 04:45:57 localhost cranky_franklin[94351]: "available": false, Oct 5 04:45:57 localhost cranky_franklin[94351]: "ceph_device": false, Oct 5 04:45:57 localhost cranky_franklin[94351]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 04:45:57 localhost cranky_franklin[94351]: "lsm_data": {}, Oct 5 04:45:57 localhost cranky_franklin[94351]: "lvs": [], Oct 5 04:45:57 localhost cranky_franklin[94351]: "path": "/dev/sr0", Oct 5 04:45:57 localhost cranky_franklin[94351]: "rejected_reasons": [ Oct 5 04:45:57 localhost cranky_franklin[94351]: 
"Insufficient space (<5GB)", Oct 5 04:45:57 localhost cranky_franklin[94351]: "Has a FileSystem" Oct 5 04:45:57 localhost cranky_franklin[94351]: ], Oct 5 04:45:57 localhost cranky_franklin[94351]: "sys_api": { Oct 5 04:45:57 localhost cranky_franklin[94351]: "actuators": null, Oct 5 04:45:57 localhost cranky_franklin[94351]: "device_nodes": "sr0", Oct 5 04:45:57 localhost cranky_franklin[94351]: "human_readable_size": "482.00 KB", Oct 5 04:45:57 localhost cranky_franklin[94351]: "id_bus": "ata", Oct 5 04:45:57 localhost cranky_franklin[94351]: "model": "QEMU DVD-ROM", Oct 5 04:45:57 localhost cranky_franklin[94351]: "nr_requests": "2", Oct 5 04:45:57 localhost cranky_franklin[94351]: "partitions": {}, Oct 5 04:45:57 localhost cranky_franklin[94351]: "path": "/dev/sr0", Oct 5 04:45:57 localhost cranky_franklin[94351]: "removable": "1", Oct 5 04:45:57 localhost cranky_franklin[94351]: "rev": "2.5+", Oct 5 04:45:57 localhost cranky_franklin[94351]: "ro": "0", Oct 5 04:45:57 localhost cranky_franklin[94351]: "rotational": "1", Oct 5 04:45:57 localhost cranky_franklin[94351]: "sas_address": "", Oct 5 04:45:57 localhost cranky_franklin[94351]: "sas_device_handle": "", Oct 5 04:45:57 localhost cranky_franklin[94351]: "scheduler_mode": "mq-deadline", Oct 5 04:45:57 localhost cranky_franklin[94351]: "sectors": 0, Oct 5 04:45:57 localhost cranky_franklin[94351]: "sectorsize": "2048", Oct 5 04:45:57 localhost cranky_franklin[94351]: "size": 493568.0, Oct 5 04:45:57 localhost cranky_franklin[94351]: "support_discard": "0", Oct 5 04:45:57 localhost cranky_franklin[94351]: "type": "disk", Oct 5 04:45:57 localhost cranky_franklin[94351]: "vendor": "QEMU" Oct 5 04:45:57 localhost cranky_franklin[94351]: } Oct 5 04:45:57 localhost cranky_franklin[94351]: } Oct 5 04:45:57 localhost cranky_franklin[94351]: ] Oct 5 04:45:57 localhost systemd[1]: libpod-524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531.scope: Deactivated successfully. 
Oct 5 04:45:57 localhost podman[94335]: 2025-10-05 08:45:57.296111012 +0000 UTC m=+0.984204922 container died 524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_franklin, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public) Oct 5 04:45:57 localhost systemd[1]: var-lib-containers-storage-overlay-7f5ee92b55fa3397bed4e4940280983e3e8bd5b2850b3daa25e61751f7c8f441-merged.mount: Deactivated successfully. 
Oct 5 04:45:57 localhost podman[96128]: 2025-10-05 08:45:57.390618669 +0000 UTC m=+0.081249094 container remove 524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_franklin, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, version=7, name=rhceph, release=553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container) Oct 5 04:45:57 localhost systemd[1]: libpod-conmon-524ba372f6d47141938f070fc8d19fa5ad13610e746001967b8dc5d2c767f531.scope: Deactivated successfully. Oct 5 04:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:46:05 localhost systemd[1]: tmp-crun.dAQPlg.mount: Deactivated successfully. 
Oct 5 04:46:05 localhost podman[96157]: 2025-10-05 08:46:05.937522284 +0000 UTC m=+0.099582147 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_id=tripleo_step1, container_name=metrics_qdr, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:46:06 localhost podman[96157]: 2025-10-05 08:46:06.153286024 +0000 UTC m=+0.315345857 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=metrics_qdr, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 5 04:46:06 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:46:16 localhost systemd[1]: tmp-crun.8qVnsv.mount: Deactivated successfully. Oct 5 04:46:16 localhost podman[96214]: 2025-10-05 08:46:16.967192237 +0000 UTC m=+0.101163161 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, architecture=x86_64, 
io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:46:16 localhost podman[96187]: 2025-10-05 08:46:16.939396674 +0000 UTC m=+0.098417127 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-ovn-controller, release=1) Oct 5 04:46:16 localhost podman[96186]: 2025-10-05 08:46:16.989946984 +0000 UTC m=+0.151463494 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 04:46:16 localhost podman[96186]: 2025-10-05 08:46:16.998684554 +0000 UTC m=+0.160201044 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:46:17 localhost podman[96211]: 2025-10-05 08:46:17.012926815 +0000 UTC 
m=+0.150547789 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:46:17 localhost podman[96210]: 2025-10-05 08:46:17.038011304 +0000 UTC m=+0.178614470 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:46:17 localhost podman[96200]: 2025-10-05 08:46:17.064890033 +0000 UTC m=+0.196974745 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:46:17 localhost podman[96186]: unhealthy Oct 5 04:46:17 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:17 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:46:17 localhost podman[96187]: 2025-10-05 08:46:17.171925675 +0000 UTC m=+0.330946128 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:46:17 localhost podman[96187]: unhealthy Oct 5 04:46:17 localhost podman[96194]: 2025-10-05 
08:46:17.177351923 +0000 UTC m=+0.332053726 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container) Oct 5 04:46:17 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:17 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:46:17 localhost podman[96194]: 2025-10-05 08:46:17.189146488 +0000 UTC m=+0.343848281 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 04:46:17 localhost podman[96211]: 2025-10-05 08:46:17.19540678 +0000 UTC m=+0.333027754 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, name=rhosp17/openstack-cron, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true) Oct 5 04:46:17 localhost podman[96214]: 2025-10-05 08:46:17.202414072 +0000 UTC m=+0.336385056 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47) Oct 5 04:46:17 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:46:17 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:46:17 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:46:17 localhost podman[96210]: 2025-10-05 08:46:17.222570847 +0000 UTC m=+0.363173993 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, container_name=collectd, release=2, version=17.1.9, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, 
name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:46:17 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:46:17 localhost podman[96188]: 2025-10-05 08:46:17.278973767 +0000 UTC m=+0.426697419 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:46:17 localhost podman[96188]: 2025-10-05 08:46:17.308157689 +0000 UTC m=+0.455881291 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, release=1, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:46:17 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:46:17 localhost podman[96200]: 2025-10-05 08:46:17.403005296 +0000 UTC m=+0.535089968 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:46:17 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:46:17 localhost systemd[1]: tmp-crun.JlOkmW.mount: Deactivated successfully. Oct 5 04:46:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:46:25 localhost systemd[1]: tmp-crun.a4CGwk.mount: Deactivated successfully. Oct 5 04:46:25 localhost podman[96363]: 2025-10-05 08:46:25.915559288 +0000 UTC m=+0.082649163 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:46:25 localhost podman[96363]: 2025-10-05 08:46:25.967120445 +0000 UTC m=+0.134210260 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., release=1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:46:25 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:46:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:46:36 localhost podman[96390]: 2025-10-05 08:46:36.884994344 +0000 UTC m=+0.060599066 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, 
vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step1, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:46:37 localhost podman[96390]: 2025-10-05 08:46:37.10092434 +0000 UTC m=+0.276529052 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, container_name=metrics_qdr, vcs-type=git, release=1, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:46:37 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:46:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:46:47 localhost podman[96445]: 2025-10-05 08:46:47.93439975 +0000 UTC m=+0.072693899 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, release=1, version=17.1.9, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4) Oct 5 04:46:47 localhost podman[96445]: 2025-10-05 08:46:47.957212136 +0000 UTC m=+0.095506295 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git) Oct 5 04:46:47 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:46:48 localhost podman[96435]: 2025-10-05 08:46:48.001838983 +0000 UTC m=+0.145744717 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, version=17.1.9, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, release=2, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd) Oct 5 04:46:48 localhost podman[96421]: 2025-10-05 08:46:48.037556045 +0000 UTC m=+0.194125747 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Oct 5 04:46:48 localhost podman[96435]: 2025-10-05 08:46:48.038029628 +0000 UTC m=+0.181935362 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_id=tripleo_step3) Oct 5 04:46:48 localhost podman[96440]: 2025-10-05 08:46:48.049568335 +0000 UTC m=+0.189819938 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64) Oct 5 04:46:48 localhost podman[96440]: 2025-10-05 08:46:48.05704857 +0000 UTC m=+0.197300143 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:46:48 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:46:48 localhost podman[96420]: 2025-10-05 08:46:48.015974172 +0000 UTC m=+0.176467112 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:46:48 localhost podman[96420]: 2025-10-05 08:46:48.09598103 +0000 
UTC m=+0.256473990 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:46:48 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:46:48 localhost podman[96420]: unhealthy Oct 5 04:46:48 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:48 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:46:48 localhost podman[96421]: 2025-10-05 08:46:48.110562051 +0000 UTC m=+0.267131763 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, release=1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:46:48 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:46:48 localhost podman[96432]: 2025-10-05 08:46:48.096255137 +0000 UTC m=+0.244407708 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, release=1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:46:48 localhost podman[96425]: 2025-10-05 08:46:48.162011545 +0000 UTC m=+0.313768975 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container) Oct 5 04:46:48 localhost podman[96425]: 2025-10-05 08:46:48.169043978 +0000 UTC m=+0.320801348 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, 
vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:46:48 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:46:48 localhost podman[96419]: 2025-10-05 08:46:48.206977551 +0000 UTC m=+0.370102683 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 04:46:48 localhost podman[96419]: 2025-10-05 08:46:48.222373075 +0000 UTC m=+0.385498177 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:46:48 localhost podman[96419]: unhealthy Oct 5 04:46:48 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:46:48 localhost systemd[1]: 
1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:46:48 localhost podman[96432]: 2025-10-05 08:46:48.465344722 +0000 UTC m=+0.613497263 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target) Oct 5 04:46:48 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:46:48 localhost systemd[1]: tmp-crun.qqI0FP.mount: Deactivated successfully. Oct 5 04:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:46:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:46:56 localhost recover_tripleo_nova_virtqemud[96589]: 63458 Oct 5 04:46:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:46:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Oct 5 04:46:56 localhost podman[96587]: 2025-10-05 08:46:56.90721863 +0000 UTC m=+0.079114326 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1) Oct 5 04:46:56 localhost podman[96587]: 2025-10-05 08:46:56.959118656 +0000 UTC m=+0.131014402 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:46:56 localhost systemd[1]: 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:47:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:47:07 localhost podman[96741]: 2025-10-05 08:47:07.920597777 +0000 UTC m=+0.082602331 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-type=git, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, architecture=x86_64, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12) Oct 5 04:47:08 localhost podman[96741]: 2025-10-05 08:47:08.142126815 +0000 UTC m=+0.304131339 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, 
io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 5 04:47:08 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:47:18 localhost systemd[1]: tmp-crun.5RYYGY.mount: Deactivated successfully. Oct 5 04:47:18 localhost podman[96804]: 2025-10-05 08:47:18.964005446 +0000 UTC m=+0.098763755 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:47:18 localhost podman[96771]: 2025-10-05 08:47:18.93829678 +0000 UTC m=+0.098442496 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:47:19 localhost podman[96772]: 2025-10-05 08:47:18.991683737 +0000 UTC m=+0.148460351 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, release=1, container_name=ovn_controller, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:47:19 localhost podman[96774]: 2025-10-05 08:47:19.046187036 +0000 UTC m=+0.197118840 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, vcs-type=git, version=17.1.9, 
batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:47:19 localhost podman[96804]: 2025-10-05 08:47:19.067338177 +0000 UTC m=+0.202096516 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:47:19 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:47:19 localhost podman[96792]: 2025-10-05 08:47:19.115159251 +0000 UTC m=+0.249860028 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:47:19 localhost podman[96774]: 2025-10-05 08:47:19.135311094 +0000 UTC m=+0.286242878 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:47:19 localhost podman[96773]: 2025-10-05 08:47:19.145937697 +0000 UTC m=+0.300124340 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:47:19 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:47:19 localhost podman[96780]: 2025-10-05 08:47:19.202760878 +0000 UTC m=+0.352283022 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, release=1, version=17.1.9, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, managed_by=tripleo_ansible) Oct 5 04:47:19 localhost podman[96792]: 2025-10-05 08:47:19.224025673 +0000 UTC m=+0.358726510 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4) Oct 5 04:47:19 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:47:19 localhost podman[96771]: 2025-10-05 08:47:19.275375184 +0000 UTC m=+0.435520870 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=ovn_metadata_agent, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.expose-services=) Oct 5 04:47:19 localhost podman[96771]: unhealthy Oct 5 04:47:19 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:19 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:47:19 localhost podman[96772]: 2025-10-05 08:47:19.325870972 +0000 UTC m=+0.482647566 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 04:47:19 localhost podman[96772]: unhealthy Oct 5 04:47:19 localhost systemd[1]: 
2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:19 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:47:19 localhost podman[96786]: 2025-10-05 08:47:19.406445787 +0000 UTC m=+0.550548172 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, release=2, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, container_name=collectd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team) Oct 5 04:47:19 localhost podman[96786]: 2025-10-05 08:47:19.417053748 +0000 UTC m=+0.561156123 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20250721.1, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, tcib_managed=true, version=17.1.9, 
name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3) Oct 5 04:47:19 localhost podman[96773]: 2025-10-05 08:47:19.430265742 +0000 UTC m=+0.584452395 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, tcib_managed=true, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4) Oct 5 04:47:19 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:47:19 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:47:19 localhost podman[96780]: 2025-10-05 08:47:19.580164081 +0000 UTC m=+0.729686305 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, distribution-scope=public, description=Red 
Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=nova_migration_target, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:47:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:47:27 localhost podman[96943]: 2025-10-05 08:47:27.911680848 +0000 UTC m=+0.077551962 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step5, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:47:27 localhost podman[96943]: 2025-10-05 08:47:27.943222114 +0000 UTC m=+0.109093188 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T14:48:37, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 5 04:47:27 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully.
Oct 5 04:47:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.
Oct 5 04:47:38 localhost podman[96969]: 2025-10-05 08:47:38.912822375 +0000 UTC m=+0.081841990 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., release=1) Oct 5 04:47:39 localhost podman[96969]: 2025-10-05 08:47:39.102665233 +0000 UTC m=+0.271684818 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 5 04:47:39 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.
Oct 5 04:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.
Oct 5 04:47:49 localhost systemd[1]: tmp-crun.vqM78d.mount: Deactivated successfully. Oct 5 04:47:49 localhost podman[96999]: 2025-10-05 08:47:49.92216393 +0000 UTC m=+0.089979064 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
batch=17.1_20250721.1) Oct 5 04:47:49 localhost podman[97005]: 2025-10-05 08:47:49.937880302 +0000 UTC m=+0.095288530 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-iscsid, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:47:50 localhost podman[97012]: 2025-10-05 08:47:50.004483403 +0000 UTC m=+0.157260114 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:47:50 localhost podman[97000]: 2025-10-05 08:47:50.035936507 +0000 UTC m=+0.196657345 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:45:33, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:47:50 localhost podman[96999]: 2025-10-05 08:47:50.036480442 +0000 UTC m=+0.204295546 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, container_name=ovn_controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Oct 5 04:47:50 localhost podman[96999]: unhealthy
Oct 5 04:47:50 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 04:47:50 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'.
Oct 5 04:47:50 localhost podman[97027]: 2025-10-05 08:47:49.96327291 +0000 UTC m=+0.109028117 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 5 04:47:50 localhost podman[97005]: 2025-10-05 08:47:50.068509922 +0000 UTC m=+0.225918140 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, release=1, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:47:50 localhost podman[97021]: 2025-10-05 08:47:49.988214416 +0000 UTC m=+0.134825587 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible) Oct 5 04:47:50 localhost podman[97027]: 2025-10-05 08:47:50.091754122 +0000 UTC m=+0.237509309 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, release=1, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:47:50 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:47:50 localhost podman[97021]: 2025-10-05 08:47:50.117297294 +0000 UTC m=+0.263908495 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp 
openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, managed_by=tripleo_ansible, release=2, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc.) Oct 5 04:47:50 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:47:50 localhost podman[97000]: 2025-10-05 08:47:50.165110458 +0000 UTC m=+0.325831306 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Oct 5 04:47:50 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:47:50 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:47:50 localhost podman[96998]: 2025-10-05 08:47:50.132491891 +0000 UTC m=+0.299571465 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:47:50 localhost podman[97035]: 2025-10-05 08:47:50.208121209 +0000 UTC m=+0.348190060 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi) Oct 5 04:47:50 localhost podman[96998]: 2025-10-05 08:47:50.21211626 +0000 UTC m=+0.379195834 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:47:50 localhost 
podman[96998]: unhealthy Oct 5 04:47:50 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:47:50 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:47:50 localhost podman[97035]: 2025-10-05 08:47:50.229459396 +0000 UTC m=+0.369528267 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-07-21T15:29:47, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:47:50 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:47:50 localhost podman[97012]: 2025-10-05 08:47:50.389392752 +0000 UTC m=+0.542169513 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=nova_migration_target) Oct 5 04:47:50 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:47:50 localhost systemd[1]: tmp-crun.g9jIBl.mount: Deactivated successfully. Oct 5 04:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:47:58 localhost podman[97161]: 2025-10-05 08:47:58.913408029 +0000 UTC m=+0.080543535 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Oct 5 04:47:58 localhost podman[97161]: 2025-10-05 08:47:58.965368457 +0000 UTC m=+0.132503953 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 
nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:47:58 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:48:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:48:09 localhost recover_tripleo_nova_virtqemud[97267]: 63458 Oct 5 04:48:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:48:09 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:48:09 localhost podman[97265]: 2025-10-05 08:48:09.929077737 +0000 UTC m=+0.087384702 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Oct 5 04:48:10 localhost podman[97265]: 2025-10-05 08:48:10.127335536 +0000 UTC m=+0.285642491 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 04:48:10 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:48:20 localhost systemd[1]: tmp-crun.oHQrcz.mount: Deactivated successfully. Oct 5 04:48:20 localhost podman[97309]: 2025-10-05 08:48:20.946602175 +0000 UTC m=+0.100621377 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=collectd, maintainer=OpenStack TripleO Team) Oct 5 04:48:20 localhost podman[97309]: 2025-10-05 08:48:20.952879337 +0000 UTC m=+0.106898559 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:48:20 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:48:20 localhost podman[97316]: 2025-10-05 08:48:20.989978606 +0000 UTC m=+0.141869399 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:07:52, tcib_managed=true, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, release=1, 
description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git) Oct 5 04:48:21 localhost podman[97299]: 2025-10-05 08:48:21.034691125 +0000 UTC m=+0.187805492 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red 
Hat, Inc., name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step3, managed_by=tripleo_ansible) Oct 5 04:48:21 localhost podman[97299]: 2025-10-05 08:48:21.040839865 +0000 UTC m=+0.193954222 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid) Oct 5 04:48:21 localhost podman[97319]: 2025-10-05 08:48:21.046359417 +0000 UTC m=+0.194502117 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:48:21 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:48:21 localhost podman[97297]: 2025-10-05 08:48:21.089009519 +0000 UTC m=+0.256562473 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 04:48:21 localhost podman[97308]: 2025-10-05 08:48:21.09562577 +0000 
UTC m=+0.243721429 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64) Oct 5 04:48:21 localhost podman[97316]: 2025-10-05 08:48:21.125331487 +0000 UTC m=+0.277222320 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, name=rhosp17/openstack-cron, release=1, version=17.1.9, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=) Oct 5 04:48:21 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:48:21 localhost podman[97296]: 2025-10-05 08:48:21.142870369 +0000 UTC m=+0.302961508 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Oct 5 04:48:21 localhost podman[97319]: 2025-10-05 08:48:21.164389181 +0000 UTC m=+0.312531831 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, name=rhosp17/openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:48:21 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:48:21 localhost podman[97298]: 2025-10-05 08:48:20.924406265 +0000 UTC m=+0.087324981 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 5 04:48:21 localhost podman[97296]: 2025-10-05 08:48:21.183137425 +0000 UTC m=+0.343228524 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible) Oct 5 04:48:21 localhost podman[97296]: unhealthy Oct 5 04:48:21 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:21 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:48:21 localhost podman[97298]: 2025-10-05 08:48:21.209068369 +0000 UTC m=+0.371987055 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, com.redhat.component=openstack-ceilometer-compute-container) Oct 5 04:48:21 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:48:21 localhost podman[97297]: 2025-10-05 08:48:21.228162483 +0000 UTC m=+0.395715417 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, config_id=tripleo_step4) Oct 5 04:48:21 localhost podman[97297]: unhealthy Oct 5 04:48:21 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:21 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:48:21 localhost podman[97308]: 2025-10-05 08:48:21.514722679 +0000 UTC m=+0.662818378 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:48:21 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:48:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:48:29 localhost podman[97468]: 2025-10-05 08:48:29.905096211 +0000 UTC m=+0.077839720 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, distribution-scope=public) Oct 5 04:48:29 localhost podman[97468]: 2025-10-05 08:48:29.936256218 +0000 UTC m=+0.108999707 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:48:29 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:48:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:48:40 localhost podman[97493]: 2025-10-05 08:48:40.917840778 +0000 UTC m=+0.090954472 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:48:41 localhost podman[97493]: 2025-10-05 08:48:41.099174342 +0000 UTC m=+0.272287976 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, build-date=2025-07-21T13:07:59, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:48:41 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:48:51 localhost podman[97521]: 2025-10-05 08:48:51.946205004 +0000 UTC m=+0.113497801 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9) Oct 5 04:48:51 localhost podman[97530]: 2025-10-05 08:48:51.996185918 +0000 UTC m=+0.148441661 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, batch=17.1_20250721.1, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:48:52 localhost podman[97521]: 2025-10-05 08:48:52.048551877 +0000 UTC m=+0.215844644 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, tcib_managed=true, config_id=tripleo_step4, 
managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:48:52 localhost podman[97521]: unhealthy Oct 5 04:48:52 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:52 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:48:52 localhost podman[97537]: 2025-10-05 08:48:52.065810441 +0000 UTC m=+0.217829607 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, config_id=tripleo_step3, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, com.redhat.component=openstack-collectd-container) Oct 5 04:48:52 localhost podman[97537]: 2025-10-05 08:48:52.104248027 +0000 UTC m=+0.256267183 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9) Oct 5 04:48:52 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:48:52 localhost podman[97544]: 2025-10-05 08:48:52.146238232 +0000 UTC m=+0.296484320 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, architecture=x86_64, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., 
com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:48:52 localhost podman[97523]: 2025-10-05 08:48:52.105128731 +0000 UTC m=+0.266806463 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.expose-services=) Oct 5 04:48:52 localhost podman[97548]: 2025-10-05 08:48:52.202841037 +0000 UTC m=+0.348660383 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, tcib_managed=true, managed_by=tripleo_ansible, release=1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:48:52 localhost podman[97544]: 2025-10-05 08:48:52.206964031 +0000 UTC m=+0.357210119 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, release=1, tcib_managed=true, build-date=2025-07-21T13:07:52) Oct 5 04:48:52 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:48:52 localhost podman[97523]: 2025-10-05 08:48:52.234078995 +0000 UTC m=+0.395756777 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64) Oct 5 04:48:52 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:48:52 localhost podman[97548]: 2025-10-05 08:48:52.257186691 +0000 UTC m=+0.403006047 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, distribution-scope=public) Oct 5 04:48:52 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:48:52 localhost podman[97524]: 2025-10-05 08:48:52.31027696 +0000 UTC m=+0.461176096 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public) Oct 5 04:48:52 localhost podman[97524]: 2025-10-05 08:48:52.323082832 +0000 UTC m=+0.473981988 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15) Oct 5 04:48:52 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:48:52 localhost podman[97522]: 2025-10-05 08:48:52.363961696 +0000 UTC m=+0.527588361 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, release=1, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:48:52 localhost podman[97522]: 2025-10-05 08:48:52.374945348 +0000 UTC m=+0.538571993 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., container_name=ovn_controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:48:52 localhost podman[97522]: unhealthy Oct 5 04:48:52 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:48:52 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:48:52 localhost podman[97530]: 2025-10-05 08:48:52.417279411 +0000 UTC m=+0.569535214 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, 
io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:48:52 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:48:52 localhost systemd[1]: tmp-crun.mwxxCp.mount: Deactivated successfully. Oct 5 04:49:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:49:00 localhost podman[97691]: 2025-10-05 08:49:00.903884868 +0000 UTC m=+0.072615307 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:49:00 localhost podman[97691]: 2025-10-05 08:49:00.959109516 +0000 UTC m=+0.127839985 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, 
batch=17.1_20250721.1, container_name=nova_compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step5) Oct 5 04:49:00 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:49:03 localhost podman[97820]: 2025-10-05 08:49:03.373443443 +0000 UTC m=+0.059977300 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, 
io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 04:49:03 localhost podman[97820]: 2025-10-05 08:49:03.472041433 +0000 UTC m=+0.158575300 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=) Oct 5 04:49:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:49:11 localhost podman[97963]: 2025-10-05 08:49:11.923632648 +0000 UTC m=+0.084162214 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:59, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:49:12 localhost podman[97963]: 2025-10-05 08:49:12.133168247 +0000 UTC m=+0.293697803 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public) Oct 5 04:49:12 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 04:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:49:22 localhost podman[97992]: 2025-10-05 08:49:22.938136712 +0000 UTC m=+0.097405558 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:49:22 localhost podman[97992]: 2025-10-05 08:49:22.981169834 +0000 UTC m=+0.140438660 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc.) 
Oct 5 04:49:22 localhost podman[97992]: unhealthy Oct 5 04:49:22 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:22 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:49:22 localhost podman[97998]: 2025-10-05 08:49:22.994992994 +0000 UTC m=+0.138910799 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3) Oct 5 04:49:23 localhost podman[97998]: 2025-10-05 08:49:23.008145796 +0000 UTC m=+0.152063571 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9) Oct 5 04:49:23 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:49:23 localhost podman[98018]: 2025-10-05 08:49:23.051762224 +0000 UTC m=+0.187078272 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47) Oct 5 04:49:23 localhost podman[98018]: 2025-10-05 08:49:23.108581717 +0000 UTC m=+0.243897755 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:49:23 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:49:23 localhost podman[97994]: 2025-10-05 08:49:23.149978014 +0000 UTC m=+0.302216187 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute) Oct 5 04:49:23 localhost podman[97993]: 2025-10-05 08:49:23.196306627 +0000 UTC m=+0.352307534 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Oct 5 04:49:23 localhost podman[97994]: 2025-10-05 08:49:23.204140303 +0000 UTC m=+0.356378446 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, release=1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Oct 5 04:49:23 localhost podman[98014]: 2025-10-05 08:49:23.200716839 +0000 UTC m=+0.341731823 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, batch=17.1_20250721.1, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 5 04:49:23 localhost podman[98009]: 2025-10-05 08:49:23.112950406 +0000 UTC m=+0.256118250 container health_status 
9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, release=2, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:49:23 localhost podman[98014]: 2025-10-05 08:49:23.239093013 +0000 UTC m=+0.380107997 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=) Oct 5 04:49:23 localhost podman[98009]: 2025-10-05 08:49:23.248216264 +0000 UTC m=+0.391384108 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:49:23 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:49:23 localhost podman[98003]: 2025-10-05 08:49:23.254829436 +0000 UTC m=+0.397800904 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, release=1, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:49:23 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:49:23 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:49:23 localhost podman[97993]: 2025-10-05 08:49:23.294479876 +0000 UTC m=+0.450480833 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:49:23 localhost podman[97993]: unhealthy Oct 5 04:49:23 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:23 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:49:23 localhost podman[98003]: 2025-10-05 08:49:23.603210181 +0000 UTC m=+0.746181639 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:49:23 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:49:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:49:31 localhost podman[98160]: 2025-10-05 08:49:31.905509694 +0000 UTC m=+0.077531052 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute) Oct 5 04:49:31 localhost podman[98160]: 2025-10-05 08:49:31.934047378 +0000 UTC m=+0.106068736 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=) Oct 5 04:49:31 localhost systemd[1]: 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:49:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:49:42 localhost podman[98185]: 2025-10-05 08:49:42.913646674 +0000 UTC m=+0.084898134 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, container_name=metrics_qdr, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:49:43 localhost podman[98185]: 2025-10-05 08:49:43.108948542 +0000 UTC m=+0.280199952 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Oct 5 04:49:43 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:49:53 localhost systemd[1]: tmp-crun.VKAeVH.mount: Deactivated successfully. Oct 5 04:49:53 localhost podman[98217]: 2025-10-05 08:49:53.938143295 +0000 UTC m=+0.091181787 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., release=1, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:49:53 localhost podman[98223]: 2025-10-05 08:49:53.979508842 +0000 UTC m=+0.131976249 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, release=2, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:49:53 localhost podman[98223]: 2025-10-05 08:49:53.990031131 +0000 UTC m=+0.142498508 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, vcs-type=git, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:49:53 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:49:54 localhost podman[98217]: 2025-10-05 08:49:54.001603569 +0000 UTC m=+0.154642051 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, distribution-scope=public) Oct 5 04:49:54 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98214]: 2025-10-05 08:49:53.97141565 +0000 UTC m=+0.129746938 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2025-07-21T16:28:53) Oct 5 04:49:54 localhost podman[98215]: 2025-10-05 08:49:54.037365462 +0000 UTC m=+0.193406137 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:49:54 localhost podman[98218]: 2025-10-05 08:49:54.047035088 +0000 UTC m=+0.186539108 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, 
batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-nova-compute-container) Oct 5 04:49:54 localhost podman[98214]: 2025-10-05 08:49:54.056113667 +0000 UTC m=+0.214444965 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, 
architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53) Oct 5 04:49:54 localhost podman[98214]: unhealthy Oct 5 04:49:54 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:54 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:49:54 localhost podman[98216]: 2025-10-05 08:49:54.105661599 +0000 UTC m=+0.258773673 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:49:54 localhost podman[98229]: 2025-10-05 08:49:54.137460913 +0000 UTC m=+0.283053110 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, 
description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:49:54 localhost podman[98216]: 2025-10-05 08:49:54.142169353 +0000 UTC m=+0.295281417 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, version=17.1.9, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1) Oct 5 04:49:54 localhost podman[98248]: 2025-10-05 08:49:54.150334886 +0000 UTC 
m=+0.287895043 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 04:49:54 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98229]: 2025-10-05 08:49:54.175490598 +0000 UTC m=+0.321082785 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, release=1, config_id=tripleo_step4, container_name=logrotate_crond, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:49:54 localhost podman[98215]: 2025-10-05 08:49:54.175704274 +0000 UTC m=+0.331744979 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:49:54 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98215]: unhealthy Oct 5 04:49:54 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:49:54 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:49:54 localhost podman[98248]: 2025-10-05 08:49:54.279759834 +0000 UTC m=+0.417320041 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, container_name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, distribution-scope=public, release=1) Oct 5 04:49:54 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:49:54 localhost podman[98218]: 2025-10-05 08:49:54.381190952 +0000 UTC m=+0.520695002 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1, version=17.1.9) Oct 5 04:49:54 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:50:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:50:02 localhost podman[98378]: 2025-10-05 08:50:02.902819953 +0000 UTC m=+0.071718852 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64) Oct 5 04:50:02 localhost podman[98378]: 2025-10-05 08:50:02.957537846 +0000 UTC m=+0.126436745 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:50:02 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:50:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:50:11 localhost recover_tripleo_nova_virtqemud[98482]: 63458 Oct 5 04:50:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:50:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:50:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:50:13 localhost podman[98483]: 2025-10-05 08:50:13.919282992 +0000 UTC m=+0.085276034 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Oct 5 04:50:14 localhost podman[98483]: 2025-10-05 08:50:14.113552622 +0000 UTC m=+0.279545604 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:50:14 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:50:24 localhost systemd[1]: tmp-crun.vxi1ys.mount: Deactivated successfully. Oct 5 04:50:24 localhost podman[98523]: 2025-10-05 08:50:24.94435714 +0000 UTC m=+0.097400598 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:50:24 localhost systemd[1]: tmp-crun.J95Sb5.mount: Deactivated successfully. 
Oct 5 04:50:24 localhost podman[98515]: 2025-10-05 08:50:24.993904942 +0000 UTC m=+0.154223690 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, version=17.1.9, build-date=2025-07-21T13:27:15, container_name=iscsid, config_id=tripleo_step3) Oct 5 04:50:25 localhost podman[98515]: 2025-10-05 08:50:25.005956863 +0000 UTC m=+0.166275601 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:50:25 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:50:25 localhost podman[98535]: 2025-10-05 08:50:25.043261528 +0000 UTC m=+0.186020853 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, release=1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=) Oct 5 04:50:25 localhost podman[98535]: 2025-10-05 08:50:25.094204128 +0000 UTC m=+0.236963473 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:50:25 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:50:25 localhost podman[98514]: 2025-10-05 08:50:25.097028745 +0000 UTC m=+0.244716636 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:50:25 localhost podman[98512]: 2025-10-05 08:50:25.148048648 +0000 UTC m=+0.312096089 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, release=1, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Oct 5 04:50:25 localhost podman[98534]: 2025-10-05 08:50:25.209243169 +0000 UTC m=+0.353557978 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:50:25 localhost podman[98534]: 2025-10-05 08:50:25.218075953 +0000 UTC m=+0.362390792 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, version=17.1.9, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-type=git, io.openshift.tags=rhosp osp openstack 
osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:50:25 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:50:25 localhost podman[98528]: 2025-10-05 08:50:25.260365875 +0000 UTC m=+0.408048386 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, 
build-date=2025-07-21T13:04:03, distribution-scope=public, release=2, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:50:25 localhost podman[98512]: 2025-10-05 08:50:25.277551537 +0000 UTC m=+0.441598958 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53) Oct 5 04:50:25 localhost podman[98512]: unhealthy Oct 5 04:50:25 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:25 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:50:25 localhost podman[98528]: 2025-10-05 08:50:25.299039348 +0000 UTC m=+0.446721839 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=2, build-date=2025-07-21T13:04:03, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container) Oct 5 04:50:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:50:25 localhost podman[98514]: 2025-10-05 08:50:25.327309145 +0000 UTC m=+0.474997036 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1, vcs-type=git) Oct 5 04:50:25 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:50:25 localhost podman[98523]: 2025-10-05 08:50:25.359244583 +0000 UTC m=+0.512288111 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, config_id=tripleo_step4, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:50:25 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:50:25 localhost podman[98513]: 2025-10-05 08:50:25.404214409 +0000 UTC m=+0.568650970 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:50:25 localhost podman[98513]: 2025-10-05 08:50:25.426309136 +0000 UTC m=+0.590745687 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Oct 5 04:50:25 localhost podman[98513]: unhealthy Oct 5 04:50:25 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:25 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:50:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:50:33 localhost podman[98683]: 2025-10-05 08:50:33.92806089 +0000 UTC m=+0.093006657 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, vcs-type=git, container_name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1) Oct 5 04:50:33 localhost podman[98683]: 2025-10-05 08:50:33.946488546 +0000 UTC m=+0.111434333 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:50:33 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:50:44 localhost podman[98709]: 2025-10-05 08:50:44.902344011 +0000 UTC m=+0.071457344 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:50:45 localhost podman[98709]: 2025-10-05 08:50:45.115545211 +0000 UTC m=+0.284658554 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1) Oct 5 04:50:45 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 04:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:50:55 localhost podman[98740]: 2025-10-05 08:50:55.94131387 +0000 UTC m=+0.104985057 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 04:50:55 localhost systemd[1]: tmp-crun.iljpI2.mount: Deactivated successfully. 
Oct 5 04:50:55 localhost podman[98761]: 2025-10-05 08:50:55.95663732 +0000 UTC m=+0.102260680 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, 
build-date=2025-07-21T13:07:52, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12) Oct 5 04:50:55 localhost podman[98740]: 2025-10-05 08:50:55.981152305 +0000 UTC m=+0.144823512 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:50:55 localhost podman[98740]: unhealthy Oct 5 04:50:55 localhost podman[98761]: 2025-10-05 08:50:55.99044691 +0000 UTC m=+0.136070270 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Oct 5 04:50:55 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:55 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:50:56 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:50:56 localhost podman[98742]: 2025-10-05 08:50:56.039703134 +0000 UTC m=+0.198089456 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12) Oct 5 04:50:56 localhost podman[98766]: 2025-10-05 08:50:55.990913402 +0000 UTC m=+0.132975875 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 04:50:56 localhost podman[98766]: 2025-10-05 08:50:56.075115508 +0000 UTC m=+0.217177961 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_id=tripleo_step4, release=1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:50:56 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:50:56 localhost podman[98741]: 2025-10-05 08:50:56.09414785 +0000 UTC m=+0.255680868 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, release=1) Oct 5 04:50:56 localhost podman[98742]: 2025-10-05 08:50:56.097272536 +0000 
UTC m=+0.255658888 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:45:33, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute) Oct 5 04:50:56 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:50:56 localhost podman[98749]: 2025-10-05 08:50:56.156801482 +0000 UTC m=+0.309095236 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, container_name=nova_migration_target, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:50:56 localhost podman[98741]: 2025-10-05 08:50:56.183192387 +0000 UTC m=+0.344725335 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, release=1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:50:56 localhost podman[98741]: unhealthy Oct 5 04:50:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:50:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:50:56 localhost podman[98743]: 2025-10-05 08:50:56.198981572 +0000 UTC m=+0.350729971 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true) Oct 5 04:50:56 localhost podman[98743]: 2025-10-05 08:50:56.20657103 +0000 UTC m=+0.358319459 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, container_name=iscsid, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1) Oct 5 04:50:56 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:50:56 localhost podman[98756]: 2025-10-05 08:50:56.257466989 +0000 UTC m=+0.403806450 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, release=2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:50:56 localhost podman[98756]: 2025-10-05 08:50:56.263899696 +0000 UTC m=+0.410239177 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container) Oct 5 04:50:56 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:50:56 localhost podman[98749]: 2025-10-05 08:50:56.543382937 +0000 UTC m=+0.695676771 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1) Oct 5 04:50:56 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:50:56 localhost systemd[1]: tmp-crun.DbAsuY.mount: Deactivated successfully. Oct 5 04:51:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:51:04 localhost podman[98910]: 2025-10-05 08:51:04.911946379 +0000 UTC m=+0.079337051 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team) Oct 5 04:51:04 localhost podman[98910]: 2025-10-05 08:51:04.966383325 +0000 UTC m=+0.133773997 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 04:51:04 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:51:15 localhost podman[99013]: 2025-10-05 08:51:15.896411433 +0000 UTC m=+0.064037173 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1) Oct 5 04:51:16 localhost podman[99013]: 2025-10-05 08:51:16.104921866 +0000 UTC m=+0.272547626 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.33.12) Oct 5 04:51:16 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:51:26 localhost podman[99057]: 2025-10-05 08:51:26.935338241 +0000 UTC m=+0.082275602 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, tcib_managed=true, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 04:51:26 localhost podman[99042]: 2025-10-05 08:51:26.995850076 +0000 UTC m=+0.158209198 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, 
container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, tcib_managed=true) Oct 5 04:51:27 localhost podman[99042]: 2025-10-05 08:51:27.040295831 +0000 UTC m=+0.202654963 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T16:28:53, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4) Oct 5 04:51:27 localhost podman[99042]: unhealthy Oct 5 04:51:27 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:27 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:51:27 localhost podman[99069]: 2025-10-05 08:51:27.056406012 +0000 UTC m=+0.198358796 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc.) Oct 5 04:51:27 localhost podman[99063]: 2025-10-05 08:51:27.110863781 +0000 UTC m=+0.254810250 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, version=17.1.9, container_name=logrotate_crond, 
io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1) Oct 5 04:51:27 localhost podman[99069]: 2025-10-05 08:51:27.114246994 +0000 UTC m=+0.256199728 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, config_id=tripleo_step4, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1) Oct 5 04:51:27 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:51:27 localhost podman[99057]: 2025-10-05 08:51:27.125925804 +0000 UTC m=+0.272863185 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, distribution-scope=public, io.openshift.expose-services=, 
vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, config_id=tripleo_step3, container_name=collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9) Oct 5 04:51:27 localhost podman[99063]: 2025-10-05 08:51:27.144537523 +0000 UTC m=+0.288484022 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:51:27 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:51:27 localhost podman[99045]: 2025-10-05 08:51:27.160331145 +0000 UTC m=+0.314196315 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:51:27 localhost podman[99045]: 2025-10-05 08:51:27.196141835 +0000 UTC m=+0.350006965 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:51:27 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:51:27 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:51:27 localhost podman[99043]: 2025-10-05 08:51:27.245760301 +0000 UTC m=+0.406225932 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Oct 5 04:51:27 localhost podman[99050]: 2025-10-05 08:51:27.199434165 +0000 
UTC m=+0.350755935 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:51:27 localhost podman[99043]: 2025-10-05 08:51:27.285117218 +0000 UTC m=+0.445582819 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, build-date=2025-07-21T13:28:44, release=1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public) Oct 5 04:51:27 localhost podman[99043]: unhealthy Oct 5 04:51:27 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:27 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:51:27 localhost podman[99044]: 2025-10-05 08:51:27.297477276 +0000 UTC m=+0.454972046 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=ceilometer_agent_compute, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3) Oct 5 04:51:27 localhost podman[99044]: 2025-10-05 08:51:27.327188188 +0000 UTC m=+0.484682928 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.9, container_name=ceilometer_agent_compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 5 04:51:27 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:51:27 localhost podman[99050]: 2025-10-05 08:51:27.582256186 +0000 UTC m=+0.733577946 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, architecture=x86_64, 
io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:51:27 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:51:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:51:35 localhost systemd[1]: tmp-crun.vYw13r.mount: Deactivated successfully. Oct 5 04:51:35 localhost podman[99217]: 2025-10-05 08:51:35.89458998 +0000 UTC m=+0.065326989 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:51:35 localhost podman[99217]: 2025-10-05 08:51:35.949193003 +0000 UTC m=+0.119929972 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:51:35 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 04:51:46 localhost podman[99244]: 2025-10-05 08:51:46.9025606 +0000 UTC m=+0.076259177 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, container_name=metrics_qdr) Oct 5 04:51:47 localhost podman[99244]: 2025-10-05 08:51:47.12082015 +0000 UTC m=+0.294518757 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container) Oct 5 04:51:47 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:51:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:51:57 localhost podman[99272]: 2025-10-05 08:51:57.93759682 +0000 UTC m=+0.101864217 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:51:57 localhost podman[99272]: 2025-10-05 08:51:57.979117716 +0000 UTC m=+0.143385133 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9) Oct 5 04:51:57 localhost podman[99272]: unhealthy Oct 5 04:51:57 localhost systemd[1]: tmp-crun.QvSLNx.mount: Deactivated successfully. 
Oct 5 04:51:57 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:57 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:51:57 localhost podman[99275]: 2025-10-05 08:51:57.997103358 +0000 UTC m=+0.148979887 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=iscsid, release=1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:51:58 localhost podman[99275]: 2025-10-05 08:51:58.010127184 +0000 UTC m=+0.162003743 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vcs-type=git, managed_by=tripleo_ansible) Oct 5 04:51:58 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:51:58 localhost podman[99299]: 2025-10-05 08:51:58.055269309 +0000 UTC m=+0.198532242 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:51:58 localhost podman[99274]: 2025-10-05 08:51:58.102631104 +0000 UTC m=+0.260771644 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vendor=Red Hat, Inc., distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Oct 5 04:51:58 localhost podman[99299]: 2025-10-05 08:51:58.1101598 +0000 UTC m=+0.253422703 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T15:29:47, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_ipmi, distribution-scope=public, 
tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible) Oct 5 04:51:58 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:51:58 localhost podman[99294]: 2025-10-05 08:51:58.156588081 +0000 UTC m=+0.296798890 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, release=1, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 04:51:58 localhost podman[99274]: 2025-10-05 08:51:58.187148146 +0000 UTC m=+0.345288626 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc., 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, tcib_managed=true, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:51:58 localhost podman[99294]: 2025-10-05 08:51:58.195178986 +0000 UTC m=+0.335389715 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, architecture=x86_64, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:51:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:51:58 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:51:58 localhost podman[99273]: 2025-10-05 08:51:58.246214372 +0000 UTC m=+0.406902981 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, vcs-type=git, release=1, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:51:58 localhost podman[99288]: 2025-10-05 08:51:58.200013728 +0000 
UTC m=+0.346494969 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.9, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, name=rhosp17/openstack-collectd, release=2, io.buildah.version=1.33.12) Oct 5 04:51:58 localhost podman[99273]: 2025-10-05 08:51:58.25930739 +0000 UTC m=+0.419996049 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, 
vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1) Oct 5 04:51:58 localhost podman[99273]: unhealthy Oct 5 04:51:58 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:51:58 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:51:58 localhost podman[99281]: 2025-10-05 08:51:58.304164277 +0000 UTC m=+0.455710816 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Oct 5 04:51:58 localhost podman[99288]: 2025-10-05 08:51:58.330488477 +0000 UTC m=+0.476969748 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, managed_by=tripleo_ansible, release=2, tcib_managed=true) Oct 5 04:51:58 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:51:58 localhost podman[99281]: 2025-10-05 08:51:58.690401011 +0000 UTC m=+0.841947510 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible) Oct 5 04:51:58 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:52:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:52:06 localhost systemd[1]: tmp-crun.gN9G97.mount: Deactivated successfully. Oct 5 04:52:06 localhost podman[99447]: 2025-10-05 08:52:06.914624979 +0000 UTC m=+0.081818678 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., container_name=nova_compute, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:52:06 localhost podman[99447]: 2025-10-05 08:52:06.985449336 +0000 UTC m=+0.152642995 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Oct 5 04:52:06 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:52:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:52:10 localhost recover_tripleo_nova_virtqemud[99552]: 63458 Oct 5 04:52:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:52:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:52:17 localhost systemd[1]: tmp-crun.39be1P.mount: Deactivated successfully. 
Oct 5 04:52:17 localhost podman[99553]: 2025-10-05 08:52:17.924695547 +0000 UTC m=+0.094350272 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
tcib_managed=true, container_name=metrics_qdr, release=1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:52:18 localhost podman[99553]: 2025-10-05 08:52:18.122619831 +0000 UTC m=+0.292274516 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, release=1, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59) Oct 5 04:52:18 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:52:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:52:28 localhost systemd[1]: tmp-crun.GNhysN.mount: Deactivated successfully. Oct 5 04:52:28 localhost podman[99584]: 2025-10-05 08:52:28.918871589 +0000 UTC m=+0.085919952 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:52:28 localhost podman[99586]: 2025-10-05 08:52:28.931588536 +0000 UTC m=+0.090939618 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=) Oct 5 04:52:28 localhost podman[99584]: 2025-10-05 08:52:28.933992002 +0000 UTC m=+0.101040365 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1) Oct 5 04:52:28 localhost podman[99584]: unhealthy Oct 5 04:52:28 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:52:28 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:52:28 localhost podman[99586]: 2025-10-05 08:52:28.949941559 +0000 UTC m=+0.109292641 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Oct 5 04:52:28 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:52:28 localhost podman[99593]: 2025-10-05 08:52:28.994248171 +0000 UTC m=+0.150528929 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:04:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, 
config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, release=2, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:52:29 localhost podman[99588]: 2025-10-05 08:52:29.005980021 +0000 UTC m=+0.163786221 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:52:29 localhost podman[99593]: 2025-10-05 08:52:29.02898116 +0000 UTC m=+0.185261918 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, release=2, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, build-date=2025-07-21T13:04:03, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:52:29 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:52:29 localhost podman[99585]: 2025-10-05 08:52:29.062313642 +0000 UTC m=+0.225196461 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44) Oct 5 04:52:29 localhost podman[99587]: 2025-10-05 08:52:29.098264716 +0000 
UTC m=+0.254668227 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public) Oct 5 04:52:29 localhost podman[99595]: 2025-10-05 08:52:29.10683823 +0000 UTC m=+0.260725302 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat 
OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container) Oct 5 04:52:29 localhost podman[99585]: 2025-10-05 08:52:29.128692138 +0000 UTC m=+0.291574967 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, vcs-type=git, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:52:29 localhost podman[99585]: unhealthy Oct 5 04:52:29 localhost podman[99595]: 2025-10-05 08:52:29.140286485 +0000 UTC m=+0.294173547 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, release=1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:52:29 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:52:29 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:52:29 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:52:29 localhost podman[99587]: 2025-10-05 08:52:29.185588304 +0000 UTC m=+0.341991775 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Oct 5 04:52:29 localhost podman[99601]: 2025-10-05 08:52:29.039291703 +0000 UTC m=+0.188072477 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public) Oct 5 04:52:29 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:52:29 localhost podman[99601]: 2025-10-05 08:52:29.223132361 +0000 UTC m=+0.371913135 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., release=1) Oct 5 04:52:29 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:52:29 localhost podman[99588]: 2025-10-05 08:52:29.336150163 +0000 UTC m=+0.493956393 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, release=1, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, managed_by=tripleo_ansible, container_name=nova_migration_target) Oct 5 04:52:29 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:52:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:52:37 localhost systemd[1]: tmp-crun.dGRW64.mount: Deactivated successfully. Oct 5 04:52:37 localhost podman[99745]: 2025-10-05 08:52:37.907023239 +0000 UTC m=+0.078944340 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, release=1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:52:37 localhost podman[99745]: 2025-10-05 08:52:37.925190886 +0000 UTC m=+0.097111957 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, distribution-scope=public, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, 
container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git) Oct 5 04:52:37 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:52:48 localhost podman[99770]: 2025-10-05 08:52:48.910486836 +0000 UTC m=+0.083767762 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.9, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd) Oct 5 04:52:49 localhost podman[99770]: 2025-10-05 08:52:49.103861415 +0000 UTC m=+0.277142401 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, 
architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 04:52:49 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:52:55 localhost systemd[1]: session-28.scope: Deactivated successfully. Oct 5 04:52:55 localhost systemd[1]: session-28.scope: Consumed 7min 10.431s CPU time. 
Oct 5 04:52:55 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Oct 5 04:52:55 localhost systemd-logind[760]: Removed session 28. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:52:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:52:59 localhost systemd[1]: tmp-crun.4wsyJ1.mount: Deactivated successfully. 
Oct 5 04:52:59 localhost podman[99832]: 2025-10-05 08:52:59.962418227 +0000 UTC m=+0.089268053 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Oct 5 04:52:59 localhost podman[99832]: 2025-10-05 08:52:59.979420232 +0000 UTC m=+0.106270078 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, container_name=ceilometer_agent_ipmi, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public) Oct 5 04:52:59 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:52:59 localhost podman[99800]: 2025-10-05 08:52:59.943418037 +0000 UTC m=+0.104303974 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, tcib_managed=true, 
build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git) Oct 5 04:53:00 localhost podman[99800]: 2025-10-05 08:53:00.021722299 +0000 UTC m=+0.182608206 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container) Oct 5 04:53:00 localhost podman[99800]: unhealthy Oct 5 04:53:00 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:53:00 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:53:00 localhost podman[99801]: 2025-10-05 08:53:00.100988918 +0000 UTC m=+0.256794526 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, release=1) Oct 5 04:53:00 localhost podman[99820]: 2025-10-05 08:53:00.067947194 +0000 UTC m=+0.210654623 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, release=2, architecture=x86_64, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 5 04:53:00 localhost podman[99820]: 2025-10-05 08:53:00.146847172 +0000 UTC m=+0.289554621 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=2, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:53:00 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:53:00 localhost podman[99801]: 2025-10-05 08:53:00.175180516 +0000 UTC m=+0.330986184 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, architecture=x86_64, distribution-scope=public) Oct 5 04:53:00 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:53:00 localhost podman[99805]: 2025-10-05 08:53:00.149970917 +0000 UTC m=+0.300764207 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, release=1, architecture=x86_64, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack 
Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:53:00 localhost podman[99799]: 2025-10-05 08:53:00.148825226 +0000 UTC m=+0.310894385 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:53:00 localhost podman[99827]: 2025-10-05 08:53:00.260725727 +0000 UTC m=+0.396175288 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, container_name=logrotate_crond, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, release=1, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, name=rhosp17/openstack-cron) Oct 5 04:53:00 localhost podman[99799]: 2025-10-05 08:53:00.279611253 +0000 UTC m=+0.441680422 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9) Oct 5 04:53:00 localhost podman[99799]: unhealthy Oct 5 04:53:00 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:53:00 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:53:00 localhost podman[99827]: 2025-10-05 08:53:00.293807541 +0000 UTC m=+0.429257142 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=logrotate_crond, distribution-scope=public, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c) Oct 5 04:53:00 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:53:00 localhost podman[99805]: 2025-10-05 08:53:00.332049687 +0000 UTC m=+0.482842927 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:53:00 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:53:00 localhost podman[99814]: 2025-10-05 08:53:00.40491733 +0000 UTC m=+0.550916600 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, 
version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true) Oct 5 04:53:00 localhost podman[99814]: 2025-10-05 08:53:00.796577934 +0000 UTC m=+0.942577164 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., release=1, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:53:00 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:53:05 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 5 04:53:05 localhost systemd[35815]: Activating special unit Exit the Session... Oct 5 04:53:05 localhost systemd[35815]: Removed slice User Background Tasks Slice. Oct 5 04:53:05 localhost systemd[35815]: Stopped target Main User Target. Oct 5 04:53:05 localhost systemd[35815]: Stopped target Basic System. Oct 5 04:53:05 localhost systemd[35815]: Stopped target Paths. Oct 5 04:53:05 localhost systemd[35815]: Stopped target Sockets. Oct 5 04:53:05 localhost systemd[35815]: Stopped target Timers. Oct 5 04:53:05 localhost systemd[35815]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 5 04:53:05 localhost systemd[35815]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 04:53:05 localhost systemd[35815]: Closed D-Bus User Message Bus Socket. Oct 5 04:53:05 localhost systemd[35815]: Stopped Create User's Volatile Files and Directories. Oct 5 04:53:05 localhost systemd[35815]: Removed slice User Application Slice. 
Oct 5 04:53:05 localhost systemd[35815]: Reached target Shutdown. Oct 5 04:53:05 localhost systemd[35815]: Finished Exit the Session. Oct 5 04:53:05 localhost systemd[35815]: Reached target Exit the Session. Oct 5 04:53:05 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 5 04:53:05 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 5 04:53:05 localhost systemd[1]: user@1003.service: Consumed 3.842s CPU time, read 0B from disk, written 7.0K to disk. Oct 5 04:53:05 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Oct 5 04:53:05 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 5 04:53:05 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 5 04:53:05 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 5 04:53:05 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 5 04:53:05 localhost systemd[1]: user-1003.slice: Consumed 7min 14.297s CPU time. Oct 5 04:53:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:53:08 localhost podman[99979]: 2025-10-05 08:53:08.947586428 +0000 UTC m=+0.094985660 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible) Oct 5 04:53:08 localhost podman[99979]: 2025-10-05 08:53:08.977173707 +0000 UTC m=+0.124572929 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 
04:53:08 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:53:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:53:19 localhost podman[100082]: 2025-10-05 08:53:19.917755873 +0000 UTC m=+0.082845427 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 04:53:20 localhost podman[100082]: 2025-10-05 08:53:20.13299137 +0000 UTC m=+0.298080924 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Oct 5 04:53:20 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:53:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:53:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:53:30 localhost recover_tripleo_nova_virtqemud[100162]: 63458 Oct 5 04:53:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:53:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:53:30 localhost podman[100120]: 2025-10-05 08:53:30.951124407 +0000 UTC m=+0.101127277 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 04:53:30 localhost podman[100120]: 2025-10-05 08:53:30.984408208 +0000 UTC m=+0.134411078 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15) Oct 5 04:53:31 localhost podman[100134]: 2025-10-05 08:53:31.001739022 +0000 UTC m=+0.142823638 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, version=17.1.9, container_name=logrotate_crond, architecture=x86_64) Oct 5 04:53:31 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:53:31 localhost podman[100134]: 2025-10-05 08:53:31.039129464 +0000 UTC m=+0.180214070 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true) Oct 5 04:53:31 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:53:31 localhost podman[100125]: 2025-10-05 08:53:31.050718182 +0000 UTC m=+0.196835916 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Oct 5 04:53:31 localhost podman[100112]: 2025-10-05 08:53:31.102906729 +0000 UTC m=+0.264974269 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Oct 5 04:53:31 localhost podman[100112]: 2025-10-05 08:53:31.117829657 +0000 UTC m=+0.279897197 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Oct 5 04:53:31 localhost podman[100112]: unhealthy Oct 5 04:53:31 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:53:31 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:53:31 localhost podman[100113]: 2025-10-05 08:53:31.15123386 +0000 UTC m=+0.309802034 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64) Oct 5 04:53:31 localhost podman[100113]: 2025-10-05 08:53:31.19326987 +0000 
UTC m=+0.351838084 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Oct 5 04:53:31 localhost podman[100113]: unhealthy Oct 5 04:53:31 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 04:53:31 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:53:31 localhost podman[100128]: 2025-10-05 08:53:31.213508414 +0000 UTC m=+0.358545518 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, vcs-type=git, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:04:03, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=2) Oct 5 04:53:31 localhost podman[100139]: 2025-10-05 08:53:30.970785355 +0000 UTC m=+0.108026726 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, architecture=x86_64, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:53:31 localhost podman[100128]: 2025-10-05 08:53:31.226351055 +0000 UTC m=+0.371388089 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, version=17.1.9, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, name=rhosp17/openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 
17.1 collectd, container_name=collectd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Oct 5 04:53:31 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:53:31 localhost podman[100114]: 2025-10-05 08:53:31.193874507 +0000 UTC m=+0.348618437 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible) Oct 5 04:53:31 localhost podman[100139]: 2025-10-05 08:53:31.252243994 +0000 UTC m=+0.389485355 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, 
name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1, distribution-scope=public, architecture=x86_64) Oct 5 04:53:31 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:53:31 localhost podman[100114]: 2025-10-05 08:53:31.276146787 +0000 UTC m=+0.430890697 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public) Oct 5 04:53:31 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:53:31 localhost podman[100125]: 2025-10-05 08:53:31.456702567 +0000 UTC m=+0.602820301 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
container_name=nova_migration_target, release=1, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37) Oct 5 04:53:31 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:53:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:53:39 localhost podman[100284]: 2025-10-05 08:53:39.90384713 +0000 UTC m=+0.076456632 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step5, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, tcib_managed=true) Oct 5 04:53:39 localhost podman[100284]: 2025-10-05 08:53:39.959230045 +0000 UTC m=+0.131839507 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, tcib_managed=true, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, config_id=tripleo_step5, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:53:39 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:53:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:53:50 localhost podman[100312]: 2025-10-05 08:53:50.914740958 +0000 UTC m=+0.079644599 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Oct 5 04:53:51 localhost podman[100312]: 2025-10-05 08:53:51.10306511 +0000 UTC m=+0.267968761 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:53:51 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:54:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:54:01 localhost podman[100362]: 2025-10-05 08:54:01.943080044 +0000 UTC m=+0.091022371 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=2, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team) Oct 5 04:54:01 localhost podman[100362]: 2025-10-05 08:54:01.953024277 +0000 UTC m=+0.100966603 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20250721.1) Oct 5 04:54:01 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:54:01 localhost podman[100373]: 2025-10-05 08:54:01.997121132 +0000 UTC m=+0.131896738 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, release=1, architecture=x86_64, build-date=2025-07-21T15:29:47, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc.) Oct 5 04:54:02 localhost podman[100373]: 2025-10-05 08:54:02.022284491 +0000 UTC m=+0.157060057 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:54:02 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:54:02 localhost podman[100344]: 2025-10-05 08:54:02.04124667 +0000 UTC m=+0.202340126 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-07-21T14:45:33, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:54:02 localhost podman[100343]: 2025-10-05 08:54:02.091244697 +0000 UTC m=+0.250241895 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:28:44, distribution-scope=public) Oct 5 04:54:02 localhost podman[100344]: 2025-10-05 08:54:02.094056364 +0000 UTC m=+0.255149790 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:54:02 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:54:02 localhost podman[100343]: 2025-10-05 08:54:02.134994714 +0000 UTC m=+0.293991912 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., 
config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, release=1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:54:02 localhost podman[100343]: unhealthy Oct 5 04:54:02 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:02 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:54:02 localhost podman[100342]: 2025-10-05 08:54:02.153375577 +0000 UTC m=+0.318311968 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:54:02 localhost podman[100342]: 2025-10-05 08:54:02.196253149 +0000 UTC m=+0.361189520 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T16:28:53, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 5 04:54:02 localhost podman[100342]: unhealthy Oct 5 04:54:02 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:02 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:54:02 localhost podman[100355]: 2025-10-05 08:54:02.269907114 +0000 UTC m=+0.416630877 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, batch=17.1_20250721.1) Oct 5 04:54:02 localhost podman[100367]: 2025-10-05 08:54:02.321138705 +0000 UTC m=+0.463851509 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, name=rhosp17/openstack-cron, container_name=logrotate_crond, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat 
OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12) Oct 5 04:54:02 localhost podman[100345]: 2025-10-05 08:54:02.24195166 +0000 UTC m=+0.396406605 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, release=1) Oct 5 04:54:02 localhost podman[100367]: 2025-10-05 08:54:02.355149196 +0000 UTC m=+0.497862000 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 5 04:54:02 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:54:02 localhost podman[100345]: 2025-10-05 08:54:02.377133407 +0000 UTC m=+0.531588362 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64) Oct 5 04:54:02 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:54:02 localhost podman[100355]: 2025-10-05 08:54:02.64411825 +0000 UTC m=+0.790842013 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 
17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git) Oct 5 04:54:02 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:54:02 localhost systemd[1]: tmp-crun.lDpFuf.mount: Deactivated successfully. Oct 5 04:54:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:54:10 localhost podman[100514]: 2025-10-05 08:54:10.905271706 +0000 UTC m=+0.077409478 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_id=tripleo_step5) Oct 5 04:54:10 localhost podman[100514]: 2025-10-05 08:54:10.936094239 +0000 UTC m=+0.108231961 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:54:10 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:54:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:54:21 localhost podman[100617]: 2025-10-05 08:54:21.920199595 +0000 UTC m=+0.087105683 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:54:22 localhost podman[100617]: 2025-10-05 08:54:22.113471542 +0000 UTC m=+0.280377620 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1, build-date=2025-07-21T13:07:59, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:54:22 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:54:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:54:32 localhost podman[100647]: 2025-10-05 08:54:32.91412351 +0000 UTC m=+0.080148533 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller) Oct 5 04:54:32 localhost systemd[1]: tmp-crun.LC20dD.mount: Deactivated successfully. Oct 5 04:54:33 localhost podman[100660]: 2025-10-05 08:54:33.019358658 +0000 UTC m=+0.173649661 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:54:33 localhost podman[100647]: 2025-10-05 08:54:33.027432239 +0000 UTC m=+0.193457302 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, io.openshift.expose-services=) Oct 5 04:54:33 localhost podman[100647]: unhealthy Oct 5 04:54:33 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:33 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:54:33 localhost podman[100667]: 2025-10-05 08:54:33.085288682 +0000 UTC m=+0.235885813 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:54:33 localhost podman[100652]: 2025-10-05 08:54:33.131341151 +0000 UTC m=+0.286718333 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, container_name=iscsid, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:54:33 localhost podman[100652]: 2025-10-05 08:54:33.146819535 +0000 UTC m=+0.302196717 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, release=1) Oct 5 04:54:33 localhost systemd[1]: 
6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:54:33 localhost podman[100646]: 2025-10-05 08:54:32.951952454 +0000 UTC m=+0.116215299 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:54:33 localhost podman[100665]: 2025-10-05 08:54:33.107728996 +0000 UTC m=+0.253523176 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, tcib_managed=true, io.buildah.version=1.33.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=) Oct 5 04:54:33 localhost podman[100667]: 2025-10-05 08:54:33.167097819 +0000 UTC m=+0.317694950 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, config_id=tripleo_step4, release=1, vcs-type=git) Oct 5 04:54:33 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:54:33 localhost podman[100648]: 2025-10-05 08:54:32.979760145 +0000 UTC m=+0.141269745 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, build-date=2025-07-21T14:45:33, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, io.openshift.expose-services=) Oct 5 04:54:33 localhost podman[100646]: 2025-10-05 08:54:33.187533938 +0000 UTC m=+0.351796843 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team) Oct 5 04:54:33 localhost podman[100646]: unhealthy Oct 5 04:54:33 localhost podman[100679]: 2025-10-05 08:54:33.005985533 +0000 UTC m=+0.149517101 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:54:33 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:54:33 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:54:33 localhost podman[100648]: 2025-10-05 08:54:33.213040826 +0000 UTC m=+0.374550416 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc.) Oct 5 04:54:33 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:54:33 localhost podman[100679]: 2025-10-05 08:54:33.236145928 +0000 UTC m=+0.379677496 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
config_id=tripleo_step4, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:54:33 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:54:33 localhost podman[100665]: 2025-10-05 08:54:33.28813621 +0000 UTC m=+0.433930470 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:54:33 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:54:33 localhost podman[100660]: 2025-10-05 08:54:33.358144985 +0000 UTC m=+0.512435988 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9) Oct 5 04:54:33 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:54:33 localhost systemd[1]: tmp-crun.j7ROts.mount: Deactivated successfully. Oct 5 04:54:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:54:41 localhost podman[100806]: 2025-10-05 08:54:41.911473742 +0000 UTC m=+0.083015662 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Oct 5 04:54:41 localhost podman[100806]: 2025-10-05 08:54:41.943120398 +0000 UTC m=+0.114662338 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 
5 04:54:41 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:54:51 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:54:51 localhost recover_tripleo_nova_virtqemud[100833]: 63458 Oct 5 04:54:51 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:54:51 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:54:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:54:52 localhost podman[100834]: 2025-10-05 08:54:52.916063759 +0000 UTC m=+0.084396039 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.9, config_id=tripleo_step1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Oct 5 04:54:53 localhost podman[100834]: 2025-10-05 08:54:53.139298635 +0000 UTC m=+0.307630945 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, release=1, name=rhosp17/openstack-qdrouterd, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 04:54:53 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:55:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:55:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.1 total, 600.0 interval
Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:55:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:55:03 localhost podman[100866]: 2025-10-05 08:55:03.960924217 +0000 UTC m=+0.104508079 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 04:55:03 localhost systemd[1]: tmp-crun.ydpbI7.mount: Deactivated 
successfully. Oct 5 04:55:03 localhost podman[100866]: 2025-10-05 08:55:03.977140421 +0000 UTC m=+0.120724283 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1) Oct 5 04:55:04 localhost podman[100865]: 2025-10-05 08:55:03.941032773 +0000 UTC 
m=+0.097771465 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:55:04 localhost podman[100874]: 2025-10-05 08:55:04.053301204 +0000 UTC m=+0.198830550 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:55:04 localhost podman[100893]: 2025-10-05 08:55:04.065226191 +0000 UTC m=+0.195325964 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Oct 5 04:55:04 localhost podman[100880]: 2025-10-05 08:55:04.016701743 +0000 UTC m=+0.149778288 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, tcib_managed=true, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.9, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, container_name=collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2025-07-21T13:04:03) Oct 5 04:55:04 localhost podman[100866]: unhealthy Oct 5 04:55:04 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:04 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:55:04 localhost podman[100893]: 2025-10-05 08:55:04.090020118 +0000 UTC m=+0.220119911 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, architecture=x86_64, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git) Oct 5 04:55:04 localhost podman[100880]: 2025-10-05 08:55:04.104067622 +0000 UTC m=+0.237144107 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=collectd, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:55:04 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:55:04 localhost podman[100887]: 2025-10-05 08:55:04.11382274 +0000 UTC m=+0.254316068 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, container_name=logrotate_crond, architecture=x86_64, tcib_managed=true, distribution-scope=public) Oct 5 04:55:04 localhost podman[100887]: 2025-10-05 08:55:04.120453621 +0000 UTC m=+0.260946879 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:55:04 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:55:04 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:55:04 localhost podman[100868]: 2025-10-05 08:55:04.162900992 +0000 UTC m=+0.312027036 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible) Oct 5 04:55:04 localhost podman[100865]: 2025-10-05 08:55:04.174712275 +0000 UTC m=+0.331450987 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, build-date=2025-07-21T16:28:53, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc.) 
Oct 5 04:55:04 localhost podman[100865]: unhealthy Oct 5 04:55:04 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:04 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:55:04 localhost podman[100868]: 2025-10-05 08:55:04.2260724 +0000 UTC m=+0.375198464 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15) Oct 5 04:55:04 localhost podman[100867]: 2025-10-05 08:55:04.226037289 +0000 UTC m=+0.377601170 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:55:04 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:55:04 localhost podman[100867]: 2025-10-05 08:55:04.308078033 +0000 UTC m=+0.459641944 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1) Oct 5 04:55:04 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:55:04 localhost podman[100874]: 2025-10-05 08:55:04.479290776 +0000 UTC m=+0.624820062 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_migration_target) Oct 5 04:55:04 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 04:55:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 04:55:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5653 writes, 24K keys, 5653 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5653 writes, 719 syncs, 7.86 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8 writes, 16 keys, 8 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 8 writes, 4 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 04:55:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:55:12 localhost podman[101029]: 2025-10-05 08:55:12.913417924 +0000 UTC m=+0.081541291 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, container_name=nova_compute, 
batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Oct 5 04:55:12 localhost podman[101029]: 2025-10-05 08:55:12.943505808 +0000 UTC m=+0.111629135 
container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:55:12 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:55:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:55:23 localhost systemd[1]: tmp-crun.ld77YF.mount: Deactivated successfully. 
Oct 5 04:55:23 localhost podman[101131]: 2025-10-05 08:55:23.910727813 +0000 UTC m=+0.085577582 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=) Oct 5 04:55:24 localhost podman[101131]: 2025-10-05 08:55:24.101501631 +0000 UTC m=+0.276351360 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, tcib_managed=true) Oct 5 04:55:24 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:55:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:55:34 localhost systemd[1]: tmp-crun.tX8DxT.mount: Deactivated successfully. Oct 5 04:55:34 localhost podman[101159]: 2025-10-05 08:55:34.924954593 +0000 UTC m=+0.091003310 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1) Oct 5 04:55:34 localhost podman[101159]: 2025-10-05 08:55:34.963167159 +0000 UTC m=+0.129215916 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:55:34 localhost systemd[1]: tmp-crun.vF3XKH.mount: Deactivated successfully. 
Oct 5 04:55:34 localhost podman[101159]: unhealthy Oct 5 04:55:34 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:34 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:55:35 localhost podman[101168]: 2025-10-05 08:55:35.030081709 +0000 UTC m=+0.179990324 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:55:35 localhost podman[101160]: 2025-10-05 08:55:35.081189877 +0000 UTC m=+0.241119936 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:55:35 localhost podman[101167]: 2025-10-05 08:55:35.095036146 +0000 UTC m=+0.253013682 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, container_name=iscsid, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:55:35 localhost podman[101167]: 2025-10-05 08:55:35.106023426 +0000 UTC m=+0.264000952 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible) Oct 5 04:55:35 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:55:35 localhost podman[101160]: 2025-10-05 08:55:35.121089789 +0000 UTC m=+0.281019858 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:55:35 localhost podman[101160]: unhealthy Oct 5 04:55:35 localhost podman[101173]: 2025-10-05 
08:55:34.978145728 +0000 UTC m=+0.123776586 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, managed_by=tripleo_ansible, release=2, vendor=Red Hat, Inc.) Oct 5 04:55:35 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:55:35 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:55:35 localhost podman[101185]: 2025-10-05 08:55:35.150617136 +0000 UTC m=+0.291748901 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., container_name=logrotate_crond, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:55:35 localhost podman[101161]: 2025-10-05 08:55:35.153966648 +0000 UTC m=+0.312107088 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, distribution-scope=public, version=17.1.9, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:55:35 localhost podman[101187]: 2025-10-05 08:55:35.005561048 +0000 UTC m=+0.147424793 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, release=1, architecture=x86_64) Oct 5 04:55:35 localhost 
podman[101185]: 2025-10-05 08:55:35.159207741 +0000 UTC m=+0.300339526 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, release=1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:55:35 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:55:35 localhost podman[101161]: 2025-10-05 08:55:35.184078121 +0000 UTC m=+0.342218581 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:55:35 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:55:35 localhost podman[101173]: 2025-10-05 08:55:35.209570419 +0000 UTC m=+0.355201267 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:55:35 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:55:35 localhost podman[101187]: 2025-10-05 08:55:35.238211532 +0000 UTC m=+0.380075237 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:55:35 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:55:35 localhost podman[101168]: 2025-10-05 08:55:35.372142415 +0000 UTC m=+0.522051070 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1) Oct 5 04:55:35 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:55:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:55:43 localhost systemd[1]: tmp-crun.oBXwEj.mount: Deactivated successfully. 
Oct 5 04:55:43 localhost podman[101329]: 2025-10-05 08:55:43.922217704 +0000 UTC m=+0.093008495 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, container_name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team) Oct 5 04:55:43 localhost podman[101329]: 2025-10-05 08:55:43.951181947 +0000 UTC m=+0.121972728 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:55:43 localhost systemd[1]: 
700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:55:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:55:54 localhost recover_tripleo_nova_virtqemud[101360]: 63458 Oct 5 04:55:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:55:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:55:54 localhost podman[101355]: 2025-10-05 08:55:54.91801331 +0000 UTC m=+0.084610525 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, container_name=metrics_qdr, version=17.1.9, release=1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1) Oct 5 04:55:55 localhost podman[101355]: 2025-10-05 08:55:55.138279845 +0000 UTC m=+0.304877030 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Oct 5 04:55:55 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:56:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:56:05 localhost systemd[1]: tmp-crun.Tiob9U.mount: Deactivated successfully. Oct 5 04:56:05 localhost podman[101410]: 2025-10-05 08:56:05.95195727 +0000 UTC m=+0.093979971 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:56:05 localhost podman[101398]: 2025-10-05 08:56:05.958012225 +0000 UTC m=+0.106349660 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-type=git) Oct 5 04:56:05 localhost podman[101410]: 2025-10-05 08:56:05.961079449 +0000 UTC m=+0.103102130 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., container_name=logrotate_crond, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Oct 5 04:56:05 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:56:05 localhost podman[101390]: 2025-10-05 08:56:05.992804528 +0000 UTC m=+0.147383033 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, io.buildah.version=1.33.12, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, architecture=x86_64) Oct 5 04:56:06 localhost podman[101421]: 2025-10-05 08:56:06.006574684 +0000 UTC m=+0.139921588 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, vcs-type=git, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:56:06 localhost podman[101390]: 2025-10-05 08:56:06.012971309 +0000 UTC m=+0.167549834 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public) Oct 5 04:56:06 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:56:06 localhost podman[101421]: 2025-10-05 08:56:06.022372276 +0000 UTC m=+0.155719190 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 04:56:06 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:56:06 localhost podman[101388]: 2025-10-05 08:56:06.053891408 +0000 UTC m=+0.213746088 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Oct 5 04:56:06 localhost podman[101389]: 2025-10-05 08:56:06.109299493 +0000 UTC m=+0.265203184 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-type=git, version=17.1.9, 
build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12) Oct 5 04:56:06 localhost podman[101389]: 2025-10-05 08:56:06.123196814 +0000 UTC m=+0.279100455 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc.) Oct 5 04:56:06 localhost podman[101389]: unhealthy Oct 5 04:56:06 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:56:06 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:56:06 localhost podman[101405]: 2025-10-05 08:56:06.168998787 +0000 UTC m=+0.312158570 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-collectd-container, vcs-type=git, container_name=collectd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=2, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:56:06 localhost podman[101391]: 2025-10-05 08:56:06.126642118 +0000 UTC m=+0.276920355 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:56:06 localhost podman[101405]: 2025-10-05 08:56:06.181005935 +0000 UTC m=+0.324165638 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, release=2, build-date=2025-07-21T13:04:03, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2) Oct 5 04:56:06 localhost podman[101388]: 2025-10-05 08:56:06.1907016 +0000 UTC m=+0.350556290 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, 
distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=) Oct 5 04:56:06 localhost podman[101388]: unhealthy Oct 5 04:56:06 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:56:06 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:56:06 localhost podman[101391]: 2025-10-05 08:56:06.212973859 +0000 UTC m=+0.363252056 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9, tcib_managed=true) Oct 5 04:56:06 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:56:06 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:56:06 localhost podman[101398]: 2025-10-05 08:56:06.314123676 +0000 UTC m=+0.462461111 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red 
Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public) Oct 5 04:56:06 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:56:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:56:14 localhost systemd[1]: tmp-crun.yEDyaD.mount: Deactivated successfully. Oct 5 04:56:14 localhost podman[101555]: 2025-10-05 08:56:14.909524606 +0000 UTC m=+0.082086497 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:56:14 localhost podman[101555]: 2025-10-05 08:56:14.938098507 +0000 UTC m=+0.110660398 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:56:14 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:56:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:56:25 localhost systemd[1]: tmp-crun.dk2sVs.mount: Deactivated successfully. 
Oct 5 04:56:25 localhost podman[101659]: 2025-10-05 08:56:25.926970102 +0000 UTC m=+0.091795682 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack 
osp-17.1, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-type=git) Oct 5 04:56:26 localhost podman[101659]: 2025-10-05 08:56:26.113048672 +0000 UTC m=+0.277874242 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1, build-date=2025-07-21T13:07:59, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, managed_by=tripleo_ansible) Oct 5 04:56:26 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 04:56:36 localhost systemd[1]: tmp-crun.tDTdcY.mount: Deactivated successfully. Oct 5 04:56:36 localhost podman[101689]: 2025-10-05 08:56:36.93746858 +0000 UTC m=+0.100360426 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, io.openshift.expose-services=) Oct 5 04:56:36 localhost podman[101689]: 2025-10-05 08:56:36.98245923 +0000 UTC m=+0.145351066 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public) Oct 5 04:56:36 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:56:36 localhost podman[101687]: 2025-10-05 08:56:36.995892968 +0000 UTC m=+0.160698077 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, 
container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9) Oct 5 04:56:37 localhost podman[101702]: 2025-10-05 08:56:36.960177261 +0000 UTC m=+0.104177950 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container) Oct 5 04:56:37 localhost podman[101687]: 2025-10-05 08:56:37.037297311 +0000 UTC m=+0.202102420 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 04:56:37 localhost podman[101687]: unhealthy Oct 5 04:56:37 localhost podman[101692]: 2025-10-05 08:56:37.048205148 +0000 UTC m=+0.200233487 
container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, release=2, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 04:56:37 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:56:37 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:56:37 localhost podman[101690]: 2025-10-05 08:56:37.110235296 +0000 UTC m=+0.262042119 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:56:37 localhost podman[101688]: 2025-10-05 08:56:36.978563024 +0000 UTC m=+0.143198689 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:56:37 localhost podman[101692]: 2025-10-05 08:56:37.137172002 +0000 UTC m=+0.289200351 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=2, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-collectd, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9) Oct 5 04:56:37 localhost podman[101702]: 2025-10-05 08:56:37.147260048 +0000 UTC m=+0.291260677 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4) Oct 5 04:56:37 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:56:37 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:56:37 localhost podman[101688]: 2025-10-05 08:56:37.163109511 +0000 UTC m=+0.327745196 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, 
com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:56:37 localhost podman[101688]: unhealthy Oct 5 04:56:37 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:56:37 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:56:37 localhost podman[101691]: 2025-10-05 08:56:37.030644428 +0000 UTC m=+0.188690352 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:56:37 localhost podman[101708]: 2025-10-05 08:56:37.0899547 +0000 UTC m=+0.243706397 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, build-date=2025-07-21T15:29:47) Oct 5 04:56:37 localhost podman[101690]: 2025-10-05 08:56:37.19815063 +0000 UTC m=+0.349957493 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, container_name=iscsid) Oct 5 04:56:37 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:56:37 localhost podman[101708]: 2025-10-05 08:56:37.218428965 +0000 UTC m=+0.372180662 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi) Oct 5 04:56:37 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:56:37 localhost podman[101691]: 2025-10-05 08:56:37.38823349 +0000 UTC m=+0.546279464 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:56:37 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:56:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:56:45 localhost podman[101848]: 2025-10-05 08:56:45.90914998 +0000 UTC m=+0.076113662 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step5, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute) Oct 5 04:56:45 localhost podman[101848]: 2025-10-05 08:56:45.965170183 +0000 UTC m=+0.132133855 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true) 
Oct 5 04:56:45 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:56:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:56:56 localhost podman[101874]: 2025-10-05 08:56:56.911608749 +0000 UTC m=+0.080304228 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, architecture=x86_64, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc.) Oct 5 04:56:57 localhost podman[101874]: 2025-10-05 08:56:57.129956611 +0000 UTC m=+0.298652100 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9) Oct 5 04:56:57 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:57:07 localhost systemd[1]: tmp-crun.ynoMQ5.mount: Deactivated successfully. Oct 5 04:57:07 localhost podman[101905]: 2025-10-05 08:57:07.94085046 +0000 UTC m=+0.102191115 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Oct 5 04:57:07 localhost podman[101913]: 2025-10-05 08:57:07.953028413 +0000 UTC m=+0.101061595 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public) Oct 5 04:57:07 localhost podman[101905]: 2025-10-05 08:57:07.990145039 +0000 UTC m=+0.151485694 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, 
name=rhosp17/openstack-ceilometer-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64) Oct 5 04:57:08 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:57:08 localhost podman[101903]: 2025-10-05 08:57:07.998261961 +0000 UTC m=+0.165423287 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 04:57:08 localhost podman[101904]: 2025-10-05 08:57:08.056211036 +0000 UTC m=+0.220712049 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1) Oct 5 04:57:08 localhost podman[101903]: 2025-10-05 08:57:08.0812397 +0000 UTC m=+0.248401016 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true) Oct 5 04:57:08 localhost podman[101904]: 2025-10-05 08:57:08.101114924 +0000 UTC m=+0.265615907 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 04:57:08 localhost podman[101904]: unhealthy Oct 5 04:57:08 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:08 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:57:08 localhost podman[101931]: 2025-10-05 08:57:08.105246737 +0000 UTC m=+0.246431291 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, maintainer=OpenStack TripleO Team) Oct 5 04:57:08 localhost podman[101920]: 2025-10-05 08:57:08.160530249 +0000 UTC m=+0.308174401 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp 
openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, container_name=collectd, maintainer=OpenStack TripleO Team, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, distribution-scope=public, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12) Oct 5 04:57:08 localhost podman[101920]: 2025-10-05 08:57:08.198065116 +0000 UTC m=+0.345709228 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Oct 5 04:57:08 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:57:08 localhost podman[101926]: 2025-10-05 08:57:08.212339716 +0000 UTC m=+0.355887635 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-cron, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat 
OpenStack Platform 17.1 cron, release=1, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:57:08 localhost podman[101926]: 2025-10-05 08:57:08.244140206 +0000 UTC m=+0.387688155 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, 
architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:57:08 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:57:08 localhost podman[101906]: 2025-10-05 08:57:08.266407135 +0000 UTC m=+0.420460052 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container) Oct 5 04:57:08 localhost podman[101906]: 2025-10-05 08:57:08.278132716 +0000 UTC m=+0.432185633 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:57:08 localhost podman[101931]: 2025-10-05 08:57:08.287642336 +0000 UTC m=+0.428826930 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, version=17.1.9, container_name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1) Oct 5 04:57:08 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:57:08 localhost podman[101913]: 2025-10-05 08:57:08.301239248 +0000 UTC m=+0.449272450 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
tcib_managed=true, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.) Oct 5 04:57:08 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:57:08 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:57:08 localhost podman[101903]: unhealthy Oct 5 04:57:08 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:08 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:57:08 localhost systemd[1]: tmp-crun.hp0BRb.mount: Deactivated successfully. Oct 5 04:57:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:57:16 localhost podman[102073]: 2025-10-05 08:57:16.906792205 +0000 UTC m=+0.079888217 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible) Oct 5 04:57:16 localhost podman[102073]: 2025-10-05 08:57:16.935199842 +0000 UTC m=+0.108295864 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, release=1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.component=openstack-nova-compute-container) Oct 5 04:57:16 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:57:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:57:27 localhost podman[102225]: 2025-10-05 08:57:27.916181942 +0000 UTC m=+0.081631115 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, release=1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Oct 5 04:57:28 localhost podman[102225]: 2025-10-05 08:57:28.120460958 +0000 UTC m=+0.285910131 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, version=17.1.9, config_id=tripleo_step1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, release=1, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 5 04:57:28 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 04:57:35 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=167.94.138.199 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=12102 DF PROTO=TCP SPT=36374 DPT=19885 SEQ=1225442296 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AF75BDA8E000000000103030A) Oct 5 04:57:36 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=167.94.138.199 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=12103 DF PROTO=TCP SPT=36374 DPT=19885 SEQ=1225442296 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AF75BDE7A000000000103030A) Oct 5 04:57:36 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=167.94.138.199 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=39537 DF PROTO=TCP SPT=36380 DPT=19885 SEQ=3673041361 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AF75BE0A2000000000103030A) Oct 5 04:57:37 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=167.94.138.199 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=5316 DF PROTO=TCP SPT=39332 DPT=19885 SEQ=1519695458 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AF75BE493000000000103030A) Oct 5 04:57:38 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=167.94.138.199 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=5317 DF PROTO=TCP SPT=39332 DPT=19885 SEQ=1519695458 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AF75BE8B9000000000103030A) Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. 
Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:57:39 localhost podman[102257]: 2025-10-05 08:57:38.970552188 +0000 UTC m=+0.118749260 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:57:39 localhost podman[102268]: 2025-10-05 08:57:38.937390841 +0000 UTC m=+0.080505684 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, release=1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12) Oct 5 04:57:39 localhost podman[102255]: 2025-10-05 08:57:38.997375041 +0000 UTC m=+0.155855863 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
health_status=unhealthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true) Oct 5 04:57:39 localhost podman[102257]: 2025-10-05 08:57:39.056247281 +0000 UTC m=+0.204444303 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:57:39 localhost podman[102255]: 2025-10-05 08:57:39.076848225 +0000 UTC m=+0.235329057 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:57:39 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:57:39 localhost podman[102256]: 2025-10-05 08:57:39.10629287 +0000 UTC m=+0.262264265 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245) Oct 5 04:57:39 localhost podman[102255]: unhealthy Oct 5 04:57:39 localhost 
systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:39 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:57:39 localhost podman[102256]: 2025-10-05 08:57:39.149248845 +0000 UTC m=+0.305220220 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., 
vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9) Oct 5 04:57:39 localhost podman[102256]: unhealthy Oct 5 04:57:39 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:57:39 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:57:39 localhost podman[102278]: 2025-10-05 08:57:39.213192324 +0000 UTC m=+0.348566455 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:07:52, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:57:39 localhost podman[102278]: 2025-10-05 08:57:39.245035986 +0000 UTC m=+0.380410117 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 04:57:39 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:57:39 localhost podman[102268]: 2025-10-05 08:57:39.316332696 +0000 UTC m=+0.459447579 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, release=1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vcs-type=git) Oct 5 04:57:39 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:57:39 localhost podman[102269]: 2025-10-05 08:57:39.026290492 +0000 UTC m=+0.162048143 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.buildah.version=1.33.12, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, release=2, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, container_name=collectd, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:57:39 localhost podman[102269]: 2025-10-05 08:57:39.361535412 +0000 UTC m=+0.497293033 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:57:39 localhost podman[102284]: 2025-10-05 08:57:39.369970833 +0000 UTC m=+0.502160577 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red 
Hat, Inc., vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, release=1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-07-21T15:29:47, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public) Oct 5 04:57:39 localhost systemd[1]: 
9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:57:39 localhost podman[102284]: 2025-10-05 08:57:39.424377301 +0000 UTC m=+0.556566975 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:57:39 localhost podman[102263]: 2025-10-05 08:57:39.321800385 +0000 UTC m=+0.464116406 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:27:15, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1) Oct 5 04:57:39 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:57:39 localhost podman[102263]: 2025-10-05 08:57:39.507210977 +0000 UTC m=+0.649527028 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1) Oct 5 04:57:39 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:57:44 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.219 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=5481 DF PROTO=TCP SPT=42698 DPT=19885 SEQ=2466692858 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A909A9D5B000000000103030A) Oct 5 04:57:45 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.219 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=5482 DF PROTO=TCP SPT=42698 DPT=19885 SEQ=2466692858 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A909AA15A000000000103030A) Oct 5 04:57:46 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.219 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=56301 DF PROTO=TCP SPT=42712 DPT=19885 SEQ=1236196134 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A909AA44C000000000103030A) Oct 5 04:57:47 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.219 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=56302 DF PROTO=TCP SPT=42712 DPT=19885 SEQ=1236196134 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A909AA85A000000000103030A) Oct 5 04:57:47 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.219 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=11940 DF PROTO=TCP SPT=42738 DPT=19885 SEQ=2037334047 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A909AA995000000000103030A) Oct 5 04:57:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:57:47 localhost systemd[1]: tmp-crun.3qr8wN.mount: Deactivated successfully. 
Oct 5 04:57:47 localhost podman[102426]: 2025-10-05 08:57:47.916211438 +0000 UTC m=+0.085715966 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, container_name=nova_compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:57:47 localhost podman[102426]: 2025-10-05 08:57:47.945994063 +0000 UTC m=+0.115498581 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, 
batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 5 04:57:47 localhost 
systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:57:48 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.219 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=11941 DF PROTO=TCP SPT=42738 DPT=19885 SEQ=2037334047 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A909AAD9A000000000103030A) Oct 5 04:57:56 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.59 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=18891 DF PROTO=TCP SPT=53158 DPT=19885 SEQ=1014199465 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A62100581000000000103030A) Oct 5 04:57:57 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.59 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=18892 DF PROTO=TCP SPT=53158 DPT=19885 SEQ=1014199465 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A6210099A000000000103030A) Oct 5 04:57:57 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.59 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=10957 DF PROTO=TCP SPT=53168 DPT=19885 SEQ=931138237 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A62100A3B000000000103030A) Oct 5 04:57:58 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.59 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=10958 DF PROTO=TCP SPT=53168 DPT=19885 SEQ=931138237 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A62100E5A000000000103030A) Oct 5 04:57:58 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.59 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=18773 DF PROTO=TCP SPT=53170 DPT=19885 
SEQ=2071035136 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A62100E80000000000103030A) Oct 5 04:57:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:57:58 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:57:58 localhost recover_tripleo_nova_virtqemud[102458]: 63458 Oct 5 04:57:58 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:57:58 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:57:58 localhost podman[102451]: 2025-10-05 08:57:58.934106409 +0000 UTC m=+0.107542353 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-07-21T13:07:59, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step1) Oct 5 04:57:59 localhost podman[102451]: 2025-10-05 08:57:59.137273685 +0000 UTC m=+0.310709689 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, build-date=2025-07-21T13:07:59, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 04:57:59 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:57:59 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=206.168.34.59 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=18774 DF PROTO=TCP SPT=53170 DPT=19885 SEQ=2071035136 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080A6210129B000000000103030A) Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:58:09 localhost podman[102504]: 2025-10-05 08:58:09.922882935 +0000 UTC m=+0.074916940 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Oct 5 04:58:09 localhost systemd[1]: tmp-crun.MZZDSl.mount: Deactivated successfully. 
Oct 5 04:58:09 localhost podman[102504]: 2025-10-05 08:58:09.979439832 +0000 UTC m=+0.131473837 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20250721.1, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:58:09 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:58:10 localhost podman[102485]: 2025-10-05 08:58:10.026817028 +0000 UTC m=+0.182776761 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:58:10 localhost podman[102482]: 2025-10-05 08:58:09.983500093 +0000 UTC m=+0.147109405 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, version=17.1.9) Oct 5 04:58:10 localhost podman[102482]: 2025-10-05 08:58:10.066311248 +0000 UTC m=+0.229920600 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1) Oct 5 04:58:10 localhost podman[102482]: unhealthy Oct 5 04:58:10 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:10 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 04:58:10 localhost podman[102483]: 2025-10-05 08:58:10.081921605 +0000 UTC m=+0.247155512 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, release=1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:58:10 localhost podman[102483]: 2025-10-05 08:58:10.123015809 +0000 UTC m=+0.288249726 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack 
TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, container_name=ovn_controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:58:10 localhost podman[102483]: unhealthy Oct 5 04:58:10 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:10 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:58:10 localhost podman[102493]: 2025-10-05 08:58:10.137161446 +0000 UTC m=+0.292190433 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., distribution-scope=public, container_name=collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack 
osp-17.1, architecture=x86_64, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 04:58:10 localhost podman[102493]: 2025-10-05 08:58:10.172156203 +0000 UTC m=+0.327185170 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, 
build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, container_name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Oct 5 04:58:10 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:58:10 localhost podman[102486]: 2025-10-05 08:58:10.194170815 +0000 UTC m=+0.350030105 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, vcs-type=git, container_name=nova_migration_target) Oct 5 04:58:10 localhost podman[102484]: 2025-10-05 08:58:10.246939579 +0000 UTC m=+0.404761142 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:58:10 localhost podman[102499]: 2025-10-05 08:58:10.29595523 +0000 UTC m=+0.447406319 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:58:10 localhost podman[102485]: 2025-10-05 08:58:10.310992941 +0000 UTC m=+0.466952664 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 04:58:10 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:58:10 localhost podman[102499]: 2025-10-05 08:58:10.334671009 +0000 UTC m=+0.486122118 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4) Oct 5 04:58:10 localhost podman[102484]: 2025-10-05 08:58:10.348747954 +0000 UTC m=+0.506569547 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, version=17.1.9, container_name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:58:10 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:58:10 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 04:58:10 localhost podman[102486]: 2025-10-05 08:58:10.573133871 +0000 UTC m=+0.728993111 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, architecture=x86_64, config_id=tripleo_step4, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container) Oct 5 04:58:10 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:58:10 localhost systemd[1]: tmp-crun.1RZMjy.mount: Deactivated successfully. Oct 5 04:58:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:58:18 localhost podman[102652]: 2025-10-05 08:58:18.909043761 +0000 UTC m=+0.076472952 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 04:58:18 localhost podman[102652]: 2025-10-05 08:58:18.9371371 +0000 UTC m=+0.104566281 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, container_name=nova_compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 
5 04:58:18 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:58:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:58:29 localhost podman[102757]: 2025-10-05 08:58:29.910353259 +0000 UTC m=+0.079486515 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team) Oct 5 04:58:30 localhost podman[102757]: 2025-10-05 08:58:30.098476625 +0000 UTC m=+0.267609861 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-qdrouterd, release=1, version=17.1.9, architecture=x86_64) Oct 5 04:58:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:58:40 localhost podman[102786]: 2025-10-05 08:58:40.958029604 +0000 UTC m=+0.108100178 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 04:58:40 localhost podman[102785]: 2025-10-05 08:58:40.987518751 +0000 UTC m=+0.148536964 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, release=1, version=17.1.9, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 04:58:40 localhost podman[102799]: 2025-10-05 08:58:40.996303741 +0000 UTC m=+0.134380817 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, release=2, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 04:58:41 localhost podman[102785]: 2025-10-05 08:58:41.003177329 +0000 UTC m=+0.164195542 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12) Oct 5 04:58:41 localhost podman[102785]: unhealthy Oct 5 04:58:41 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:41 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:58:41 localhost podman[102815]: 2025-10-05 08:58:40.978347529 +0000 UTC m=+0.112998701 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container) Oct 5 04:58:41 localhost podman[102786]: 2025-10-05 08:58:41.032013597 +0000 UTC m=+0.182084151 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute) Oct 5 04:58:41 localhost podman[102784]: 2025-10-05 08:58:40.93814719 +0000 UTC m=+0.104510149 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:58:41 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:58:41 localhost podman[102797]: 2025-10-05 08:58:41.042054842 +0000 UTC m=+0.190446220 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, vendor=Red Hat, Inc.) Oct 5 04:58:41 localhost podman[102799]: 2025-10-05 08:58:41.056330333 +0000 UTC m=+0.194407399 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, version=17.1.9, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Oct 5 04:58:41 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:58:41 localhost podman[102792]: 2025-10-05 08:58:41.101672283 +0000 UTC m=+0.251604104 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, release=1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Oct 5 04:58:41 localhost podman[102815]: 2025-10-05 08:58:41.10779813 +0000 UTC m=+0.242449342 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:58:41 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:58:41 localhost podman[102800]: 2025-10-05 08:58:41.14908861 +0000 UTC m=+0.293342004 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:58:41 localhost podman[102800]: 2025-10-05 08:58:41.157749256 +0000 UTC m=+0.302002680 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=logrotate_crond, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron) Oct 5 04:58:41 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 04:58:41 localhost podman[102784]: 2025-10-05 08:58:41.172118199 +0000 UTC m=+0.338481118 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 04:58:41 localhost podman[102784]: unhealthy Oct 5 04:58:41 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:58:41 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:58:41 localhost podman[102792]: 2025-10-05 08:58:41.212371281 +0000 UTC m=+0.362303092 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public) Oct 5 04:58:41 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:58:41 localhost podman[102797]: 2025-10-05 08:58:41.425238754 +0000 UTC m=+0.573630152 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, vcs-type=git, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true) Oct 5 04:58:41 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:58:41 localhost systemd[1]: tmp-crun.Ziqx1K.mount: Deactivated successfully. Oct 5 04:58:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 04:58:49 localhost podman[102944]: 2025-10-05 08:58:49.928117992 +0000 UTC m=+0.092144022 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, container_name=nova_compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:58:49 localhost podman[102944]: 2025-10-05 08:58:49.986298983 +0000 UTC m=+0.150324943 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, 
io.openshift.expose-services=, release=1) Oct 5 04:58:49 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:59:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:59:00 localhost podman[102970]: 2025-10-05 08:59:00.911413528 +0000 UTC m=+0.082881779 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, vcs-type=git, version=17.1.9) Oct 5 04:59:01 localhost podman[102970]: 2025-10-05 08:59:01.147626979 +0000 UTC m=+0.319095230 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, vcs-type=git, container_name=metrics_qdr, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=) Oct 5 04:59:01 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:59:11 localhost podman[103021]: 2025-10-05 08:59:11.980833218 +0000 UTC m=+0.113586018 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, architecture=x86_64) Oct 5 04:59:11 localhost podman[103003]: 2025-10-05 08:59:11.997586786 +0000 UTC m=+0.146426436 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:45:33, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 04:59:12 localhost podman[103001]: 2025-10-05 08:59:11.954754655 +0000 UTC m=+0.112113848 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4) Oct 5 04:59:12 localhost podman[103021]: 2025-10-05 08:59:12.043557743 +0000 UTC m=+0.176310613 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, vcs-type=git, build-date=2025-07-21T13:04:03, container_name=collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat 
OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, release=2, managed_by=tripleo_ansible, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public) Oct 5 04:59:12 localhost podman[103010]: 2025-10-05 08:59:12.056918639 +0000 UTC m=+0.196780874 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target) Oct 5 04:59:12 localhost podman[103029]: 2025-10-05 08:59:12.069711389 +0000 UTC m=+0.190552623 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.9, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Oct 5 04:59:12 localhost podman[103002]: 2025-10-05 08:59:12.109887928 +0000 UTC m=+0.260294781 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4) Oct 5 04:59:12 localhost podman[103022]: 2025-10-05 08:59:12.127988633 +0000 UTC m=+0.259581201 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true) Oct 5 04:59:12 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 04:59:12 localhost podman[103022]: 2025-10-05 08:59:12.164104401 +0000 UTC m=+0.295696919 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:07:52, 
distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Oct 5 04:59:12 localhost podman[103004]: 2025-10-05 08:59:12.175650567 +0000 UTC m=+0.307364479 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Oct 5 04:59:12 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:59:12 localhost podman[103029]: 2025-10-05 08:59:12.184495499 +0000 UTC m=+0.305336723 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T15:29:47, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 04:59:12 localhost podman[103002]: 2025-10-05 08:59:12.198995885 +0000 UTC m=+0.349402668 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 04:59:12 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 04:59:12 localhost podman[103002]: unhealthy Oct 5 04:59:12 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:12 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 04:59:12 localhost podman[103004]: 2025-10-05 08:59:12.236518351 +0000 UTC m=+0.368232313 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, vcs-type=git, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-iscsid) Oct 5 04:59:12 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 04:59:12 localhost podman[103003]: 2025-10-05 08:59:12.250898185 +0000 UTC m=+0.399737825 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, release=1, vcs-type=git, tcib_managed=true) Oct 5 04:59:12 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:59:12 localhost podman[103001]: 2025-10-05 08:59:12.29018302 +0000 UTC m=+0.447542213 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 04:59:12 localhost podman[103001]: unhealthy Oct 5 04:59:12 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:12 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:59:12 localhost podman[103010]: 2025-10-05 08:59:12.432677837 +0000 UTC m=+0.572540092 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step4, 
container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Oct 5 04:59:12 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 04:59:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:59:20 localhost podman[103171]: 2025-10-05 08:59:20.903814817 +0000 UTC m=+0.076653887 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:59:20 localhost podman[103171]: 2025-10-05 08:59:20.9602247 +0000 UTC m=+0.133063770 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1) Oct 5 04:59:20 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 04:59:23 localhost podman[103298]: 2025-10-05 08:59:23.81919745 +0000 UTC m=+0.090887266 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , 
GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553) Oct 5 04:59:23 localhost podman[103298]: 2025-10-05 08:59:23.933112726 +0000 UTC m=+0.204802512 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Oct 5 04:59:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 04:59:31 localhost systemd[1]: tmp-crun.o39rLD.mount: Deactivated successfully. 
Oct 5 04:59:31 localhost podman[103441]: 2025-10-05 08:59:31.929724666 +0000 UTC m=+0.098671320 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, build-date=2025-07-21T13:07:59, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=) Oct 5 04:59:32 localhost podman[103441]: 2025-10-05 08:59:32.2077218 +0000 UTC m=+0.376668464 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, container_name=metrics_qdr, maintainer=OpenStack TripleO Team) Oct 5 04:59:32 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 04:59:41 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 04:59:41 localhost recover_tripleo_nova_virtqemud[103471]: 63458 Oct 5 04:59:41 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 04:59:41 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 04:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 04:59:42 localhost podman[103485]: 2025-10-05 08:59:42.937463049 +0000 UTC m=+0.089238082 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, tcib_managed=true, release=2, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Oct 5 04:59:42 localhost podman[103481]: 2025-10-05 08:59:42.993703477 +0000 UTC m=+0.147750203 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, release=1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true) Oct 5 04:59:43 localhost podman[103474]: 2025-10-05 08:59:43.042912992 +0000 UTC m=+0.200786362 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, version=17.1.9, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, release=1) Oct 5 04:59:43 localhost podman[103494]: 2025-10-05 08:59:42.956508269 +0000 UTC m=+0.101817756 container health_status 
93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, 
description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 5 04:59:43 localhost podman[103494]: 2025-10-05 08:59:43.086177446 +0000 UTC m=+0.231486973 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T13:07:52, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public) Oct 5 04:59:43 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 04:59:43 localhost podman[103473]: 2025-10-05 08:59:43.092209551 +0000 UTC m=+0.250669738 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12) Oct 5 04:59:43 localhost podman[103474]: 2025-10-05 08:59:43.170257796 +0000 UTC m=+0.328131146 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, distribution-scope=public, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4) Oct 5 04:59:43 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 04:59:43 localhost podman[103503]: 2025-10-05 08:59:43.145869559 +0000 UTC m=+0.287802984 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-type=git, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-07-21T15:29:47, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f) Oct 5 04:59:43 localhost podman[103475]: 2025-10-05 08:59:43.259281841 +0000 UTC m=+0.406701586 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 04:59:43 localhost podman[103475]: 2025-10-05 08:59:43.272172073 +0000 UTC m=+0.419591858 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git) Oct 5 04:59:43 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 04:59:43 localhost podman[103485]: 2025-10-05 08:59:43.324433083 +0000 UTC m=+0.476208176 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, managed_by=tripleo_ansible, 
release=2, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, container_name=collectd) Oct 5 04:59:43 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 04:59:43 localhost podman[103481]: 2025-10-05 08:59:43.345353215 +0000 UTC m=+0.499399951 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, release=1, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-nova-compute) Oct 5 04:59:43 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 04:59:43 localhost podman[103472]: 2025-10-05 08:59:43.412384409 +0000 UTC m=+0.575420150 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T16:28:53, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git) Oct 5 04:59:43 localhost podman[103503]: 2025-10-05 08:59:43.426096374 +0000 UTC m=+0.568029849 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1) Oct 5 04:59:43 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 04:59:43 localhost podman[103472]: 2025-10-05 08:59:43.478743774 +0000 UTC m=+0.641779535 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 04:59:43 localhost podman[103472]: unhealthy Oct 5 04:59:43 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:43 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 04:59:43 localhost podman[103473]: 2025-10-05 08:59:43.531754614 +0000 UTC m=+0.690214841 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 04:59:43 localhost podman[103473]: unhealthy Oct 5 04:59:43 localhost systemd[1]: 
2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 04:59:43 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 04:59:43 localhost systemd[1]: tmp-crun.tZNn2j.mount: Deactivated successfully. Oct 5 04:59:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 04:59:51 localhost systemd[1]: tmp-crun.llKQQu.mount: Deactivated successfully. Oct 5 04:59:51 localhost podman[103645]: 2025-10-05 08:59:51.907875755 +0000 UTC m=+0.075630370 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, release=1, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 04:59:51 localhost podman[103645]: 2025-10-05 08:59:51.936152208 +0000 UTC m=+0.103906903 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, 
build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_id=tripleo_step5, release=1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.buildah.version=1.33.12, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1) Oct 5 04:59:51 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 05:00:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:00:02 localhost systemd[1]: tmp-crun.o7oFJ4.mount: Deactivated successfully. Oct 5 05:00:02 localhost podman[103674]: 2025-10-05 09:00:02.930574366 +0000 UTC m=+0.100480799 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, version=17.1.9, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, release=1, architecture=x86_64) Oct 5 05:00:03 localhost podman[103674]: 2025-10-05 09:00:03.125140088 +0000 UTC m=+0.295046471 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, tcib_managed=true, 
config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-07-21T13:07:59, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:00:03 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 05:00:13 localhost podman[103718]: 2025-10-05 09:00:13.943480412 +0000 UTC m=+0.083190507 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=collectd, build-date=2025-07-21T13:04:03, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=2, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 05:00:13 localhost podman[103718]: 2025-10-05 09:00:13.981110391 +0000 UTC m=+0.120820506 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, 
build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public) Oct 5 05:00:13 localhost systemd[1]: tmp-crun.FaqdkX.mount: Deactivated successfully. Oct 5 05:00:13 localhost podman[103729]: 2025-10-05 09:00:13.991725671 +0000 UTC m=+0.137680717 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, distribution-scope=public, io.openshift.expose-services=) Oct 5 05:00:13 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 05:00:14 localhost podman[103729]: 2025-10-05 09:00:14.000962884 +0000 UTC m=+0.146917930 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=) Oct 5 05:00:14 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 05:00:14 localhost podman[103704]: 2025-10-05 09:00:13.986039506 +0000 UTC m=+0.149497150 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git) Oct 5 05:00:14 localhost podman[103705]: 2025-10-05 09:00:14.051706951 +0000 UTC m=+0.211845745 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, tcib_managed=true) Oct 5 05:00:14 localhost podman[103703]: 2025-10-05 09:00:14.089060954 +0000 UTC m=+0.254914854 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, vcs-type=git, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc.) Oct 5 05:00:14 localhost podman[103704]: 2025-10-05 09:00:14.120481263 +0000 UTC m=+0.283938887 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, architecture=x86_64, release=1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public) Oct 5 05:00:14 localhost podman[103704]: unhealthy Oct 5 05:00:14 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:14 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:00:14 localhost podman[103712]: 2025-10-05 09:00:14.152749325 +0000 UTC m=+0.304651193 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.) Oct 5 05:00:14 localhost podman[103705]: 2025-10-05 09:00:14.176618939 +0000 UTC m=+0.336757743 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, release=1, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1) Oct 5 05:00:14 localhost podman[103703]: 2025-10-05 09:00:14.177021819 +0000 UTC m=+0.342875769 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:00:14 localhost podman[103703]: unhealthy Oct 5 05:00:14 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:14 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:00:14 localhost podman[103706]: 2025-10-05 09:00:14.190759875 +0000 UTC m=+0.348329018 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, release=1, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc.) Oct 5 05:00:14 localhost podman[103706]: 2025-10-05 09:00:14.202287991 +0000 UTC m=+0.359857164 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:00:14 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 05:00:14 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 05:00:14 localhost podman[103730]: 2025-10-05 09:00:14.310715417 +0000 UTC m=+0.449715273 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, distribution-scope=public, build-date=2025-07-21T15:29:47, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc.) Oct 5 05:00:14 localhost podman[103730]: 2025-10-05 09:00:14.340051509 +0000 UTC m=+0.479051405 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1) Oct 5 05:00:14 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 05:00:14 localhost podman[103712]: 2025-10-05 09:00:14.545036105 +0000 UTC m=+0.696937953 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Oct 5 05:00:14 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:00:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 05:00:22 localhost podman[103877]: 2025-10-05 09:00:22.907593097 +0000 UTC m=+0.076775802 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:00:22 localhost podman[103877]: 2025-10-05 09:00:22.933863205 +0000 UTC m=+0.103045890 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step5) Oct 5 
05:00:22 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 05:00:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:00:33 localhost podman[103981]: 2025-10-05 09:00:33.917580703 +0000 UTC m=+0.084066451 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1) Oct 5 05:00:34 localhost podman[103981]: 2025-10-05 09:00:34.095522769 +0000 UTC m=+0.262008537 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:59, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1) Oct 5 05:00:34 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. 
Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:00:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 05:00:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:00:44 localhost recover_tripleo_nova_virtqemud[104061]: 63458 Oct 5 05:00:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:00:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:00:44 localhost systemd[1]: tmp-crun.zc3a5k.mount: Deactivated successfully. Oct 5 05:00:44 localhost podman[104012]: 2025-10-05 09:00:44.951610545 +0000 UTC m=+0.111017938 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:00:44 localhost podman[104013]: 2025-10-05 09:00:44.96202749 +0000 UTC m=+0.118405720 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., 
container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, tcib_managed=true, vcs-type=git) Oct 5 05:00:45 localhost podman[104012]: 2025-10-05 09:00:45.015319357 +0000 UTC m=+0.174726730 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64) Oct 5 05:00:45 localhost podman[104012]: unhealthy Oct 5 05:00:45 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:45 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 05:00:45 localhost podman[104014]: 2025-10-05 09:00:45.001658843 +0000 UTC m=+0.152946484 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:00:45 localhost podman[104013]: 2025-10-05 09:00:45.058083538 +0000 UTC m=+0.214461768 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true) Oct 5 05:00:45 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 05:00:45 localhost podman[104034]: 2025-10-05 09:00:45.029361291 +0000 UTC m=+0.168936481 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Oct 5 05:00:45 localhost podman[104014]: 2025-10-05 09:00:45.131318261 +0000 UTC m=+0.282605902 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1) Oct 5 05:00:45 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:00:45 localhost podman[104011]: 2025-10-05 09:00:45.220054537 +0000 UTC m=+0.382840142 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 05:00:45 localhost podman[104044]: 2025-10-05 09:00:45.198076337 +0000 UTC m=+0.324808046 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, build-date=2025-07-21T15:29:47, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Oct 5 05:00:45 localhost podman[104020]: 2025-10-05 09:00:45.258289933 +0000 UTC m=+0.405730539 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 5 05:00:45 localhost podman[104011]: 2025-10-05 09:00:45.267231948 +0000 UTC m=+0.430017523 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Oct 5 05:00:45 localhost podman[104011]: unhealthy Oct 5 05:00:45 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:00:45 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:00:45 localhost podman[104044]: 2025-10-05 09:00:45.281159359 +0000 UTC m=+0.407891068 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12) Oct 5 05:00:45 localhost podman[104029]: 2025-10-05 09:00:45.05926746 +0000 UTC m=+0.202261074 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, release=2, config_id=tripleo_step3, container_name=collectd) Oct 5 05:00:45 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 05:00:45 localhost podman[104029]: 2025-10-05 09:00:45.318951953 +0000 UTC m=+0.461945577 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat 
OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, release=2, batch=17.1_20250721.1, build-date=2025-07-21T13:04:03, container_name=collectd, distribution-scope=public) Oct 5 05:00:45 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 05:00:45 localhost podman[104034]: 2025-10-05 09:00:45.342790134 +0000 UTC m=+0.482365354 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:00:45 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 05:00:45 localhost podman[104020]: 2025-10-05 09:00:45.644450786 +0000 UTC m=+0.791891442 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:00:45 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:00:45 localhost systemd[1]: tmp-crun.ZBuiyn.mount: Deactivated successfully. Oct 5 05:00:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 05:00:53 localhost podman[104186]: 2025-10-05 09:00:53.91142721 +0000 UTC m=+0.079304530 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1) Oct 5 05:00:53 localhost podman[104186]: 2025-10-05 09:00:53.968487391 +0000 UTC m=+0.136364671 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, 
vcs-type=git, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Oct 5 05:00:53 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 05:01:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:01:04 localhost podman[104238]: 2025-10-05 09:01:04.909386015 +0000 UTC m=+0.077114631 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-07-21T13:07:59, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:01:05 localhost podman[104238]: 2025-10-05 09:01:05.111124853 +0000 UTC m=+0.278853449 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, container_name=metrics_qdr, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 05:01:05 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 05:01:07 localhost sshd[104267]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:01:07 localhost sshd[104268]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 05:01:15 localhost systemd[1]: tmp-crun.DzCKXB.mount: Deactivated successfully. 
Oct 5 05:01:15 localhost podman[104275]: 2025-10-05 09:01:15.928475586 +0000 UTC m=+0.089408357 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9) Oct 5 05:01:15 localhost podman[104276]: 2025-10-05 09:01:15.976578381 +0000 UTC m=+0.137492101 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=2, vcs-type=git) Oct 5 05:01:16 localhost podman[104276]: 2025-10-05 09:01:16.013225134 +0000 UTC m=+0.174138844 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-collectd, build-date=2025-07-21T13:04:03, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=2, managed_by=tripleo_ansible) Oct 5 05:01:16 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 05:01:16 localhost podman[104272]: 2025-10-05 09:01:16.026106496 +0000 UTC m=+0.193118373 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 05:01:16 localhost podman[104272]: 2025-10-05 09:01:16.06389026 +0000 
UTC m=+0.230902107 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public) Oct 5 05:01:16 localhost podman[104272]: unhealthy Oct 5 05:01:16 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 05:01:16 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:01:16 localhost podman[104274]: 2025-10-05 09:01:16.075033385 +0000 UTC m=+0.241458506 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-iscsid, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible) Oct 5 05:01:16 localhost podman[104274]: 2025-10-05 09:01:16.087148356 +0000 UTC m=+0.253573487 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.buildah.version=1.33.12) Oct 5 05:01:16 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:01:16 localhost podman[104273]: 2025-10-05 09:01:16.129972677 +0000 UTC m=+0.295246886 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, build-date=2025-07-21T14:45:33, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-type=git, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 05:01:16 localhost podman[104273]: 2025-10-05 09:01:16.185086475 +0000 UTC m=+0.350360694 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, build-date=2025-07-21T14:45:33, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 05:01:16 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 05:01:16 localhost podman[104271]: 2025-10-05 09:01:16.238105126 +0000 UTC m=+0.404245069 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 05:01:16 localhost podman[104271]: 2025-10-05 09:01:16.277271057 +0000 UTC m=+0.443410940 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1, batch=17.1_20250721.1) Oct 5 05:01:16 localhost podman[104271]: unhealthy Oct 5 05:01:16 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:16 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:01:16 localhost podman[104283]: 2025-10-05 09:01:16.188371395 +0000 UTC m=+0.342229692 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, batch=17.1_20250721.1, release=1, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52) Oct 5 05:01:16 localhost podman[104275]: 2025-10-05 09:01:16.324498159 +0000 UTC m=+0.485430850 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Oct 5 05:01:16 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:01:16 localhost podman[104283]: 2025-10-05 09:01:16.374387533 +0000 UTC m=+0.528245750 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, release=1, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64) Oct 5 05:01:16 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 05:01:16 localhost podman[104300]: 2025-10-05 09:01:16.277823122 +0000 UTC m=+0.432829821 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:29:47, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, container_name=ceilometer_agent_ipmi, 
vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4) Oct 5 05:01:16 localhost podman[104300]: 2025-10-05 09:01:16.464162798 +0000 UTC m=+0.619169547 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, version=17.1.9, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Oct 5 05:01:16 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 05:01:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 05:01:24 localhost podman[104439]: 2025-10-05 09:01:24.914405139 +0000 UTC m=+0.082051955 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9) Oct 5 
05:01:24 localhost podman[104439]: 2025-10-05 09:01:24.969253779 +0000 UTC m=+0.136900555 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, release=1, config_id=tripleo_step5, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:01:24 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 05:01:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 05:01:35 localhost podman[104543]: 2025-10-05 09:01:35.912749936 +0000 UTC m=+0.085273814 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:01:36 localhost podman[104543]: 2025-10-05 09:01:36.10557117 +0000 UTC m=+0.278095078 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:07:59, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true) Oct 5 05:01:36 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 05:01:46 localhost podman[104573]: 2025-10-05 09:01:46.938518481 +0000 UTC m=+0.102641249 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, tcib_managed=true, vendor=Red Hat, Inc.) Oct 5 05:01:46 localhost podman[104573]: 2025-10-05 09:01:46.94874879 +0000 UTC m=+0.112871588 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, distribution-scope=public, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Oct 5 05:01:46 localhost podman[104573]: unhealthy Oct 5 05:01:46 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:46 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:01:46 localhost podman[104576]: 2025-10-05 09:01:46.979518091 +0000 UTC m=+0.136433162 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, container_name=iscsid, release=1, vcs-type=git, com.redhat.component=openstack-iscsid-container) Oct 5 05:01:46 localhost podman[104575]: 2025-10-05 09:01:46.997944376 +0000 UTC m=+0.154983810 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:45:33, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Oct 5 05:01:47 localhost podman[104575]: 2025-10-05 09:01:47.045168198 +0000 UTC m=+0.202207632 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-07-21T14:45:33, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute) Oct 5 05:01:47 localhost podman[104588]: 2025-10-05 09:01:47.05073608 +0000 UTC m=+0.200808864 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=2, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, container_name=collectd, build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container) Oct 5 05:01:47 localhost systemd[1]: 
528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 05:01:47 localhost podman[104588]: 2025-10-05 09:01:47.063151689 +0000 UTC m=+0.213224473 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, release=2, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, config_id=tripleo_step3, name=rhosp17/openstack-collectd, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:01:47 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 05:01:47 localhost podman[104574]: 2025-10-05 09:01:47.033095108 +0000 UTC m=+0.195410977 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller) Oct 5 05:01:47 localhost podman[104602]: 2025-10-05 09:01:47.107349328 +0000 UTC m=+0.252784915 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git) Oct 5 05:01:47 localhost podman[104574]: 2025-10-05 09:01:47.113187488 +0000 UTC m=+0.275503327 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, distribution-scope=public, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, release=1, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Oct 5 05:01:47 localhost podman[104574]: unhealthy Oct 5 05:01:47 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:01:47 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 05:01:47 localhost podman[104602]: 2025-10-05 09:01:47.13408929 +0000 UTC m=+0.279524907 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Oct 5 05:01:47 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. Oct 5 05:01:47 localhost podman[104589]: 2025-10-05 09:01:47.188606541 +0000 UTC m=+0.335699854 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible) Oct 5 05:01:47 localhost podman[104576]: 2025-10-05 09:01:47.216446592 +0000 UTC m=+0.373361663 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, vcs-type=git, architecture=x86_64, container_name=iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 05:01:47 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:01:47 localhost podman[104589]: 2025-10-05 09:01:47.226205788 +0000 UTC m=+0.373299121 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp 
openstack osp-17.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Oct 5 05:01:47 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 05:01:47 localhost podman[104582]: 2025-10-05 09:01:47.293009516 +0000 UTC m=+0.445395564 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, container_name=nova_migration_target, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, architecture=x86_64) Oct 5 05:01:47 localhost podman[104582]: 2025-10-05 09:01:47.637296103 +0000 UTC m=+0.789682201 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, container_name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:01:47 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:01:47 localhost systemd[1]: tmp-crun.yM7iIf.mount: Deactivated successfully. Oct 5 05:01:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 05:01:55 localhost podman[104748]: 2025-10-05 09:01:55.916577276 +0000 UTC m=+0.080538795 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, tcib_managed=true, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) 
Oct 5 05:01:55 localhost podman[104748]: 2025-10-05 09:01:55.97121475 +0000 UTC m=+0.135176289 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_id=tripleo_step5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=nova_compute, build-date=2025-07-21T14:48:37, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:01:55 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 05:02:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:02:06 localhost systemd[1]: tmp-crun.oRxDeE.mount: Deactivated successfully. 
Oct 5 05:02:06 localhost podman[104774]: 2025-10-05 09:02:06.906204274 +0000 UTC m=+0.075151977 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:02:07 localhost podman[104774]: 2025-10-05 09:02:07.103083679 +0000 UTC m=+0.272031342 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-type=git, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, 
io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container) Oct 5 05:02:07 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 05:02:17 localhost systemd[1]: tmp-crun.uYCtH4.mount: Deactivated successfully. Oct 5 05:02:17 localhost podman[104802]: 2025-10-05 09:02:17.941434839 +0000 UTC m=+0.104593843 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 05:02:17 localhost podman[104802]: 2025-10-05 09:02:17.978121472 +0000 UTC m=+0.141280476 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, distribution-scope=public, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, release=1, 
io.openshift.expose-services=) Oct 5 05:02:17 localhost podman[104802]: unhealthy Oct 5 05:02:17 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:17 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:02:17 localhost podman[104826]: 2025-10-05 09:02:17.993252866 +0000 UTC m=+0.136675000 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, release=1, com.redhat.component=openstack-cron-container) Oct 5 05:02:18 localhost podman[104826]: 2025-10-05 09:02:18.025975201 +0000 UTC m=+0.169397365 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9) Oct 5 05:02:18 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 05:02:18 localhost podman[104801]: 2025-10-05 09:02:18.046293306 +0000 UTC m=+0.210829597 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
tcib_managed=true, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 05:02:18 localhost podman[104801]: 2025-10-05 09:02:18.088233194 +0000 UTC m=+0.252769465 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:02:18 localhost podman[104801]: unhealthy Oct 5 05:02:18 localhost podman[104832]: 2025-10-05 09:02:18.096924681 +0000 UTC m=+0.239808330 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, 
Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:29:47, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 05:02:18 localhost systemd[1]: 
1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:18 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:02:18 localhost podman[104803]: 2025-10-05 09:02:18.15171635 +0000 UTC m=+0.309491997 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 05:02:18 localhost podman[104804]: 2025-10-05 09:02:18.194939382 +0000 UTC m=+0.350571230 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., container_name=iscsid, distribution-scope=public, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 05:02:18 localhost podman[104832]: 2025-10-05 09:02:18.252585809 +0000 UTC m=+0.395469398 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-07-21T15:29:47, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-type=git, architecture=x86_64) Oct 5 05:02:18 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Deactivated successfully. 
Oct 5 05:02:18 localhost podman[104816]: 2025-10-05 09:02:18.262711856 +0000 UTC m=+0.407960299 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.component=openstack-collectd-container, container_name=collectd, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:02:18 localhost podman[104816]: 2025-10-05 09:02:18.296280944 +0000 UTC m=+0.441529327 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, config_id=tripleo_step3, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, release=2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, container_name=collectd) Oct 5 05:02:18 localhost podman[104804]: 2025-10-05 09:02:18.302910296 +0000 UTC m=+0.458542184 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, version=17.1.9, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, release=1, tcib_managed=true, vcs-type=git) Oct 5 05:02:18 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. Oct 5 05:02:18 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:02:18 localhost podman[104803]: 2025-10-05 09:02:18.330994674 +0000 UTC m=+0.488770241 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, 
build-date=2025-07-21T14:45:33, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 05:02:18 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. Oct 5 05:02:18 localhost podman[104815]: 2025-10-05 09:02:18.30711086 +0000 UTC m=+0.460116796 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc.) 
Oct 5 05:02:18 localhost podman[104815]: 2025-10-05 09:02:18.675122327 +0000 UTC m=+0.828128193 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=) Oct 5 05:02:18 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:02:18 localhost systemd[1]: tmp-crun.dRvPX2.mount: Deactivated successfully. Oct 5 05:02:21 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:02:21 localhost recover_tripleo_nova_virtqemud[104975]: 63458 Oct 5 05:02:21 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:02:21 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:02:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 05:02:26 localhost podman[104976]: 2025-10-05 09:02:26.89960887 +0000 UTC m=+0.068190707 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_id=tripleo_step5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9) Oct 5 05:02:26 localhost podman[104976]: 2025-10-05 09:02:26.932401196 +0000 UTC m=+0.100983053 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 05:02:26 
localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Deactivated successfully. Oct 5 05:02:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:02:37 localhost podman[105077]: 2025-10-05 09:02:37.923416793 +0000 UTC m=+0.090203389 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-qdrouterd-container) Oct 5 05:02:38 localhost podman[105077]: 2025-10-05 09:02:38.147323887 +0000 UTC m=+0.314110473 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:07:59, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Oct 5 05:02:38 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 05:02:48 localhost systemd[1]: tmp-crun.6WWjfP.mount: Deactivated successfully. Oct 5 05:02:49 localhost podman[105137]: 2025-10-05 09:02:49.007446232 +0000 UTC m=+0.141182012 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, build-date=2025-07-21T15:29:47) Oct 5 05:02:49 localhost podman[105137]: 2025-10-05 09:02:49.030652028 +0000 UTC m=+0.164387818 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Oct 5 05:02:49 localhost podman[105137]: unhealthy Oct 5 05:02:49 localhost podman[105109]: 2025-10-05 09:02:48.937459458 +0000 UTC m=+0.090857146 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2) Oct 5 05:02:49 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:49 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'. 
Oct 5 05:02:49 localhost podman[105106]: 2025-10-05 09:02:48.993273755 +0000 UTC m=+0.155244817 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team) Oct 5 05:02:49 localhost podman[105131]: 2025-10-05 09:02:48.959864071 +0000 UTC m=+0.099913953 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, architecture=x86_64, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, container_name=logrotate_crond) Oct 5 05:02:49 localhost podman[105109]: 2025-10-05 09:02:49.068153643 +0000 UTC m=+0.221551321 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, 
container_name=iscsid, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Oct 5 05:02:49 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:02:49 localhost podman[105131]: 2025-10-05 09:02:49.091110041 +0000 UTC m=+0.231159933 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, container_name=logrotate_crond, maintainer=OpenStack TripleO Team) Oct 5 05:02:49 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 05:02:49 localhost podman[105106]: 2025-10-05 09:02:49.122747366 +0000 UTC m=+0.284718428 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:02:49 localhost podman[105107]: 2025-10-05 09:02:49.146328262 +0000 UTC m=+0.306474585 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:02:49 localhost podman[105107]: 2025-10-05 09:02:49.158479774 +0000 UTC m=+0.318626077 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1, architecture=x86_64, vcs-type=git, build-date=2025-07-21T13:28:44, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12) Oct 5 05:02:49 localhost podman[105107]: unhealthy Oct 5 05:02:49 localhost podman[105119]: 2025-10-05 09:02:49.116319721 +0000 UTC m=+0.257233548 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, release=1, container_name=nova_migration_target) Oct 5 05:02:49 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:49 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 05:02:49 localhost podman[105106]: unhealthy Oct 5 05:02:49 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:02:49 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:02:49 localhost podman[105125]: 2025-10-05 09:02:49.254743837 +0000 UTC m=+0.400453464 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, distribution-scope=public, version=17.1.9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=2, maintainer=OpenStack TripleO Team) Oct 5 05:02:49 localhost podman[105125]: 2025-10-05 09:02:49.264153694 +0000 UTC m=+0.409863351 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=2, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:02:49 localhost podman[105108]: 2025-10-05 09:02:49.168151309 +0000 UTC m=+0.324215300 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, vendor=Red Hat, Inc.) 
Oct 5 05:02:49 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully.
Oct 5 05:02:49 localhost podman[105108]: 2025-10-05 09:02:49.298239887 +0000 UTC m=+0.454303868 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-07-21T14:45:33, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, release=1)
Oct 5 05:02:49 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully.
Oct 5 05:02:49 localhost podman[105119]: 2025-10-05 09:02:49.512915649 +0000 UTC m=+0.653829406 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, container_name=nova_migration_target, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1)
Oct 5 05:02:49 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully.
Oct 5 05:02:49 localhost systemd[1]: tmp-crun.t0TTal.mount: Deactivated successfully.
Oct 5 05:02:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.
Oct 5 05:02:57 localhost podman[105276]: 2025-10-05 09:02:57.904189734 +0000 UTC m=+0.075128366 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 5 05:02:57 localhost podman[105276]: 2025-10-05 09:02:57.954350406 +0000 UTC m=+0.125289058 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9)
Oct 5 05:02:57 localhost podman[105276]: unhealthy
Oct 5 05:02:57 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:02:57 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'.
Oct 5 05:03:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.
Oct 5 05:03:08 localhost podman[105298]: 2025-10-05 09:03:08.918605888 +0000 UTC m=+0.087118434 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:59, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.expose-services=, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-qdrouterd, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:03:09 localhost podman[105298]: 2025-10-05 09:03:09.116470471 +0000 UTC m=+0.284983027 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:59, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr)
Oct 5 05:03:09 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.
Oct 5 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.
Oct 5 05:03:19 localhost systemd[1]: tmp-crun.yGdahG.mount: Deactivated successfully.
Oct 5 05:03:19 localhost podman[105329]: 2025-10-05 09:03:19.937841774 +0000 UTC m=+0.101286311 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ovn-controller, tcib_managed=true)
Oct 5 05:03:19 localhost podman[105330]: 2025-10-05 09:03:19.99762193 +0000 UTC m=+0.151352871 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, version=17.1.9, name=rhosp17/openstack-ceilometer-compute, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T14:45:33)
Oct 5 05:03:20 localhost podman[105328]: 2025-10-05 09:03:19.955407285 +0000 UTC m=+0.117799183 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, tcib_managed=true)
Oct 5 05:03:20 localhost podman[105330]: 2025-10-05 09:03:20.013904865 +0000 UTC m=+0.167635916 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Oct 5 05:03:20 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully.
Oct 5 05:03:20 localhost podman[105328]: 2025-10-05 09:03:20.038172069 +0000 UTC m=+0.200563937 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true)
Oct 5 05:03:20 localhost podman[105328]: unhealthy
Oct 5 05:03:20 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:03:20 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'.
Oct 5 05:03:20 localhost podman[105336]: 2025-10-05 09:03:20.056707636 +0000 UTC m=+0.207758154 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3)
Oct 5 05:03:20 localhost podman[105336]: 2025-10-05 09:03:20.069141436 +0000 UTC m=+0.220192014 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 5 05:03:20 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully.
Oct 5 05:03:20 localhost podman[105360]: 2025-10-05 09:03:19.981574271 +0000 UTC m=+0.117889096 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, release=1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-07-21T15:29:47, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi)
Oct 5 05:03:20 localhost podman[105355]: 2025-10-05 09:03:20.110107007 +0000 UTC m=+0.248997552 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, version=17.1.9, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-cron, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 5 05:03:20 localhost podman[105342]: 2025-10-05 09:03:20.141934017 +0000 UTC m=+0.286667052 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host',
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.9, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12) Oct 5 05:03:20 localhost podman[105360]: 2025-10-05 09:03:20.161478681 +0000 UTC m=+0.297793496 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, 
name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-ipmi, release=1, tcib_managed=true, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible) Oct 5 05:03:20 localhost podman[105360]: unhealthy Oct 5 05:03:20 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:20 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'. Oct 5 05:03:20 localhost podman[105355]: 2025-10-05 09:03:20.173033687 +0000 UTC m=+0.311924222 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, release=1) Oct 5 05:03:20 localhost podman[105329]: 2025-10-05 09:03:20.174160919 +0000 UTC m=+0.337605536 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.9) Oct 5 05:03:20 localhost podman[105329]: unhealthy Oct 5 05:03:20 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:20 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:03:20 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 05:03:20 localhost podman[105344]: 2025-10-05 09:03:20.31533151 +0000 UTC m=+0.458287536 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:04:03) Oct 5 05:03:20 localhost podman[105344]: 2025-10-05 09:03:20.329013234 +0000 UTC m=+0.471969200 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:04:03, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 05:03:20 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 05:03:20 localhost podman[105342]: 2025-10-05 09:03:20.508164444 +0000 UTC m=+0.652897499 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.) Oct 5 05:03:20 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:03:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 05:03:28 localhost systemd[1]: tmp-crun.2PiwcX.mount: Deactivated successfully. Oct 5 05:03:28 localhost podman[105503]: 2025-10-05 09:03:28.925318668 +0000 UTC m=+0.089940362 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git) Oct 5 05:03:28 localhost podman[105503]: 2025-10-05 09:03:28.976121407 +0000 UTC m=+0.140743061 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, 
build-date=2025-07-21T14:48:37, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9) Oct 5 05:03:28 localhost podman[105503]: unhealthy Oct 5 05:03:28 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:28 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:03:39 localhost sshd[105602]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:03:39 localhost systemd-logind[760]: New session 36 of user zuul. Oct 5 05:03:39 localhost systemd[1]: Started Session 36 of User zuul. 
Oct 5 05:03:39 localhost podman[105604]: 2025-10-05 09:03:39.259853766 +0000 UTC m=+0.091243097 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, distribution-scope=public, release=1, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed) Oct 5 05:03:39 localhost podman[105604]: 2025-10-05 09:03:39.529994654 +0000 UTC m=+0.361383925 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-07-21T13:07:59, release=1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, version=17.1.9, container_name=metrics_qdr, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Oct 5 05:03:39 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:03:40 localhost python3.9[105727]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:03:40 localhost python3.9[105821]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:03:41 localhost python3.9[105914]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:03:42 localhost python3.9[106008]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); 
p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:03:43 localhost python3.9[106101]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:03:43 localhost python3.9[106192]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Oct 5 05:03:45 localhost python3.9[106282]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:03:46 localhost python3.9[106374]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Oct 5 05:03:47 localhost python3.9[106464]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:03:47 localhost python3.9[106512]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:03:48 localhost systemd[1]: session-36.scope: Deactivated successfully. Oct 5 05:03:48 localhost systemd[1]: session-36.scope: Consumed 5.016s CPU time. 
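The Ansible tasks above shell out to a Python one-liner that reads the `host` option from nova.conf and neutron.conf with `configparser` in non-strict mode, so the duplicate sections that puppet-generated configs sometimes contain do not raise. The same check can be reproduced standalone; this is an illustrative sketch (the throwaway file stands in for the real path under `/var/lib/config-data/puppet-generated/`):

```python
import configparser
import tempfile

def read_default_host(path):
    """Parse an OpenStack-style INI file leniently and return [DEFAULT] host.

    strict=False mirrors the one-liner in the log: duplicate sections or
    options are tolerated instead of raising DuplicateSectionError.
    """
    parser = configparser.ConfigParser(strict=False)
    parser.read(path)
    return parser["DEFAULT"]["host"]

# Demo against a throwaway file (hypothetical contents, not the node's real config)
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[DEFAULT]\nhost = compute-0.example.local\n")
    demo_path = f.name

print(read_default_host(demo_path))
```

With `strict=True` (the default), the same parse would fail on a config that repeats a section, which is presumably why the playbook passes `strict=False`.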
Oct 5 05:03:48 localhost systemd-logind[760]: Session 36 logged out. Waiting for processes to exit. Oct 5 05:03:48 localhost systemd-logind[760]: Removed session 36. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:03:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. 
Oct 5 05:03:50 localhost podman[106543]: 2025-10-05 09:03:50.952720959 +0000 UTC m=+0.105621990 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, summary=Red Hat OpenStack Platform 
17.1 collectd, config_id=tripleo_step3, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:04:03, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.33.12, name=rhosp17/openstack-collectd, architecture=x86_64) Oct 5 05:03:50 localhost podman[106543]: 2025-10-05 09:03:50.986539314 +0000 UTC m=+0.139440345 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, distribution-scope=public, io.buildah.version=1.33.12, release=2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:03:50 localhost podman[106551]: 2025-10-05 09:03:50.9941155 +0000 UTC m=+0.140826532 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-07-21T15:29:47, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, release=1, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12) Oct 5 05:03:50 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 05:03:51 localhost podman[106528]: 2025-10-05 09:03:50.981331201 +0000 UTC m=+0.145134211 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12) Oct 5 05:03:51 localhost podman[106529]: 2025-10-05 09:03:51.038955957 +0000 UTC m=+0.201657517 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, release=1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller) Oct 5 05:03:51 localhost podman[106529]: 2025-10-05 09:03:51.04636523 +0000 UTC m=+0.209066820 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, container_name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, architecture=x86_64) Oct 5 05:03:51 localhost podman[106529]: unhealthy Oct 5 05:03:51 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:51 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:03:51 localhost podman[106528]: 2025-10-05 09:03:51.062879742 +0000 UTC m=+0.226682742 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:03:51 localhost podman[106551]: 2025-10-05 09:03:51.089677175 +0000 UTC m=+0.236388247 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, 
build-date=2025-07-21T15:29:47, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.9, 
com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true) Oct 5 05:03:51 localhost podman[106551]: unhealthy Oct 5 05:03:51 localhost podman[106550]: 2025-10-05 09:03:51.09644517 +0000 UTC m=+0.244143659 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron) Oct 5 05:03:51 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:51 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'. Oct 5 05:03:51 localhost podman[106528]: unhealthy Oct 5 05:03:51 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:51 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
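In the run above, three healthchecks report `health_status=unhealthy` (ovn_metadata_agent, ovn_controller, ceilometer_agent_ipmi) and their transient systemd units fail with `status=1/FAILURE`, while collectd, logrotate_crond, and the ceilometer compute agent stay healthy. When triaging a journal dump like this, the failing container names can be pulled out with a small filter; this is an illustrative sketch (not a TripleO tool), and it assumes the `name=..., health_status=...` key order seen in these particular podman events:

```python
import re

# Matches podman health_status events and captures the container name and
# reported status from lines shaped like:
#   ... container health_status <id> (image=..., name=ovn_controller, health_status=unhealthy, ...)
EVENT = re.compile(r"container health_status \w+ \(.*?name=([\w-]+),.*?health_status=(\w+)")

def unhealthy_containers(lines):
    """Return the set of container names whose healthcheck reported unhealthy."""
    hits = set()
    for line in lines:
        m = EVENT.search(line)
        if m and m.group(2) == "unhealthy":
            hits.add(m.group(1))
    return hits

# Abbreviated sample lines in the journal's format (ids/labels shortened)
sample = [
    "Oct 5 05:03:51 localhost podman[106529]: ... container health_status abc123 "
    "(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, "
    "name=ovn_controller, health_status=unhealthy, vcs-type=git)",
    "Oct 5 05:03:50 localhost podman[106543]: ... container health_status def456 "
    "(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, "
    "name=collectd, health_status=healthy, vcs-type=git)",
]
print(unhealthy_containers(sample))  # {'ovn_controller'}
```

A filter like this only surfaces which checks failed; the underlying cause (here, the `/openstack/healthcheck` script exiting non-zero inside each container) still has to be chased in the containers' own logs.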
Oct 5 05:03:51 localhost podman[106530]: 2025-10-05 09:03:51.164583033 +0000 UTC m=+0.325269818 container health_status 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp 
openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:45:33, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ceilometer_agent_compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 05:03:51 localhost podman[106550]: 2025-10-05 09:03:51.183280745 +0000 UTC m=+0.330979234 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, name=rhosp17/openstack-cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1) Oct 5 05:03:51 localhost podman[106530]: 2025-10-05 09:03:51.194127182 +0000 UTC m=+0.354813977 container exec_died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:45:33, container_name=ceilometer_agent_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 05:03:51 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 05:03:51 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Deactivated successfully. 
Oct 5 05:03:51 localhost podman[106531]: 2025-10-05 09:03:51.041686372 +0000 UTC m=+0.189639518 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, container_name=iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:03:51 localhost podman[106532]: 2025-10-05 09:03:51.248372245 +0000 UTC m=+0.401258736 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, release=1, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 05:03:51 localhost podman[106531]: 2025-10-05 09:03:51.275150888 +0000 UTC m=+0.423104054 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, release=1) Oct 5 05:03:51 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:03:51 localhost podman[106532]: 2025-10-05 09:03:51.617225245 +0000 UTC m=+0.770111776 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, container_name=nova_migration_target, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Oct 5 05:03:51 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:03:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52024 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=891293659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749819D0000000001030307) Oct 5 05:03:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2625 DF PROTO=TCP SPT=49104 DPT=9882 SEQ=290170678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749858B0000000001030307) Oct 5 05:03:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52025 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=891293659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74985B60000000001030307) Oct 5 05:03:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2626 DF PROTO=TCP SPT=49104 DPT=9882 SEQ=290170678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74989760000000001030307) Oct 5 05:03:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=52026 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=891293659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7498DB60000000001030307) Oct 5 05:03:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2627 DF PROTO=TCP SPT=49104 DPT=9882 SEQ=290170678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74991760000000001030307) Oct 5 05:03:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 05:03:59 localhost podman[106704]: 2025-10-05 09:03:59.913246404 +0000 UTC m=+0.076776591 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:03:59 localhost podman[106704]: 2025-10-05 09:03:59.935197835 +0000 UTC m=+0.098727972 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, container_name=nova_compute, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:03:59 localhost podman[106704]: unhealthy Oct 5 05:03:59 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:03:59 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:04:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40304 DF PROTO=TCP SPT=49050 DPT=9102 SEQ=63213077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74998C50000000001030307) Oct 5 05:04:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40305 DF PROTO=TCP SPT=49050 DPT=9102 SEQ=63213077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7499CB60000000001030307) Oct 5 05:04:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52027 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=891293659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7499D760000000001030307) Oct 5 05:04:03 localhost 
sshd[106726]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:04:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2628 DF PROTO=TCP SPT=49104 DPT=9882 SEQ=290170678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749A1360000000001030307) Oct 5 05:04:03 localhost systemd-logind[760]: New session 37 of user zuul. Oct 5 05:04:03 localhost systemd[1]: Started Session 37 of User zuul. Oct 5 05:04:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40306 DF PROTO=TCP SPT=49050 DPT=9102 SEQ=63213077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749A4B60000000001030307) Oct 5 05:04:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3489 DF PROTO=TCP SPT=36052 DPT=9101 SEQ=743442854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749A4D20000000001030307) Oct 5 05:04:04 localhost python3.9[106821]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:04:04 localhost systemd[1]: Reloading. Oct 5 05:04:04 localhost systemd-rc-local-generator[106842]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:04:04 localhost systemd-sysv-generator[106848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:04:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3490 DF PROTO=TCP SPT=36052 DPT=9101 SEQ=743442854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749A8F70000000001030307) Oct 5 05:04:05 localhost python3.9[106947]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:04:05 localhost network[106964]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:04:05 localhost network[106965]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:04:05 localhost network[106966]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:04:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3491 DF PROTO=TCP SPT=36052 DPT=9101 SEQ=743442854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749B0F60000000001030307) Oct 5 05:04:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40307 DF PROTO=TCP SPT=49050 DPT=9102 SEQ=63213077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749B4770000000001030307) Oct 5 05:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 05:04:09 localhost podman[107089]: 2025-10-05 09:04:09.913429509 +0000 UTC m=+0.083481965 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, container_name=metrics_qdr, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git) Oct 5 05:04:10 localhost podman[107089]: 2025-10-05 09:04:10.116113633 +0000 UTC m=+0.286166089 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, config_id=tripleo_step1, vcs-type=git, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-qdrouterd, version=17.1.9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container) Oct 5 05:04:10 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:04:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3492 DF PROTO=TCP SPT=36052 DPT=9101 SEQ=743442854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749C0B60000000001030307) Oct 5 05:04:11 localhost python3.9[107194]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:04:11 localhost network[107211]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:04:11 localhost network[107212]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:04:11 localhost network[107213]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:04:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:04:15 localhost python3.9[107413]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:04:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:04:15 localhost recover_tripleo_nova_virtqemud[107416]: 63458 Oct 5 05:04:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:04:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:04:15 localhost systemd[1]: Reloading. Oct 5 05:04:16 localhost systemd-rc-local-generator[107442]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:04:16 localhost systemd-sysv-generator[107445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:04:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:16 localhost systemd[1]: Stopping ceilometer_agent_compute container... 
Oct 5 05:04:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7525 DF PROTO=TCP SPT=45568 DPT=9105 SEQ=1459467101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749D67A0000000001030307) Oct 5 05:04:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7526 DF PROTO=TCP SPT=45568 DPT=9105 SEQ=1459467101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749DA770000000001030307) Oct 5 05:04:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7527 DF PROTO=TCP SPT=45568 DPT=9105 SEQ=1459467101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749E2770000000001030307) Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.
Oct 5 05:04:21 localhost podman[107476]: Error: container 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 is not running
Oct 5 05:04:21 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Main process exited, code=exited, status=125/n/a
Oct 5 05:04:21 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost systemd[1]: tmp-crun.kGreRR.mount: Deactivated successfully.
Oct 5 05:04:21 localhost podman[107474]: 2025-10-05 09:04:21.474866367 +0000 UTC m=+0.139580119 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 5 05:04:21 localhost podman[107475]: 2025-10-05 09:04:21.483639616 +0000 UTC m=+0.147296969 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, release=1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container)
Oct 5 05:04:21 localhost podman[107480]: 2025-10-05 09:04:21.549452837 +0000 UTC m=+0.203067296 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 5 05:04:21 localhost podman[107478]: 2025-10-05 09:04:21.449106982 +0000 UTC m=+0.105394674 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20250721.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, release=2, container_name=collectd, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b)
Oct 5 05:04:21 localhost podman[107474]: 2025-10-05 09:04:21.564140989 +0000 UTC m=+0.228854721 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 5 05:04:21 localhost podman[107480]: 2025-10-05 09:04:21.580150817 +0000 UTC m=+0.233765256 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 5 05:04:21 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully.
Oct 5 05:04:21 localhost podman[107491]: 2025-10-05 09:04:21.593850421 +0000 UTC m=+0.243854531 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi)
Oct 5 05:04:21 localhost podman[107474]: unhealthy
Oct 5 05:04:21 localhost podman[107475]: 2025-10-05 09:04:21.622182727 +0000 UTC m=+0.285840140 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, version=17.1.9, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44)
Oct 5 05:04:21 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:04:21 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost podman[107475]: unhealthy
Oct 5 05:04:21 localhost podman[107478]: 2025-10-05 09:04:21.630317108 +0000 UTC m=+0.286604810 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, release=2, vendor=Red Hat, Inc., build-date=2025-07-21T13:04:03, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b)
Oct 5 05:04:21 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:04:21 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.
Oct 5 05:04:21 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully.
Oct 5 05:04:21 localhost podman[107491]: 2025-10-05 09:04:21.669419408 +0000 UTC m=+0.319423528 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, build-date=2025-07-21T15:29:47, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, release=1)
Oct 5 05:04:21 localhost podman[107491]: unhealthy
Oct 5 05:04:21 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:04:21 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'.
Oct 5 05:04:21 localhost podman[107477]: 2025-10-05 09:04:21.518394708 +0000 UTC m=+0.181487306 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 5 05:04:21 localhost podman[107614]: 2025-10-05 09:04:21.732170185 +0000 UTC m=+0.075482016 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9)
Oct 5 05:04:21 localhost podman[107477]: 2025-10-05 09:04:21.753304972 +0000 UTC m=+0.416397620 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15)
Oct 5 05:04:21 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully.
Oct 5 05:04:22 localhost podman[107614]: 2025-10-05 09:04:22.112223691 +0000 UTC m=+0.455535472 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12)
Oct 5 05:04:22 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully.
Oct 5 05:04:22 localhost systemd[1]: tmp-crun.w7ql3M.mount: Deactivated successfully.
Oct 5 05:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7528 DF PROTO=TCP SPT=45568 DPT=9105 SEQ=1459467101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749F2360000000001030307)
Oct 5 05:04:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10720 DF PROTO=TCP SPT=46234 DPT=9100 SEQ=3210615830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749F6CE0000000001030307)
Oct 5 05:04:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10721 DF PROTO=TCP SPT=46234 DPT=9100 SEQ=3210615830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749FAB60000000001030307)
Oct 5 05:04:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58336 DF PROTO=TCP SPT=37370 DPT=9882 SEQ=4054593849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749FABC0000000001030307)
Oct 5 05:04:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58337 DF PROTO=TCP SPT=37370 DPT=9882 SEQ=4054593849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC749FEB70000000001030307)
Oct 5 05:04:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10722 DF PROTO=TCP SPT=46234 DPT=9100 SEQ=3210615830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A02B70000000001030307)
Oct 5 05:04:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58338 DF PROTO=TCP SPT=37370 DPT=9882 SEQ=4054593849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A06B70000000001030307)
Oct 5 05:04:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.
Oct 5 05:04:30 localhost podman[107635]: 2025-10-05 09:04:30.419059596 +0000 UTC m=+0.083962028 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, vcs-type=git, config_id=tripleo_step5, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:04:30 localhost podman[107635]: 2025-10-05 09:04:30.437618233 +0000 UTC m=+0.102520645 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 05:04:30 localhost podman[107635]: unhealthy Oct 5 05:04:30 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:30 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:04:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31953 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=1093602237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A11F60000000001030307) Oct 5 05:04:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63596 DF PROTO=TCP SPT=36286 DPT=9101 SEQ=3266319749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A1A030000000001030307) Oct 5 05:04:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63598 DF PROTO=TCP SPT=36286 DPT=9101 SEQ=3266319749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A25F60000000001030307) Oct 5 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:04:40 localhost systemd[1]: tmp-crun.i6d0B4.mount: Deactivated successfully. 
Oct 5 05:04:40 localhost podman[107733]: 2025-10-05 09:04:40.395724656 +0000 UTC m=+0.068465134 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, build-date=2025-07-21T13:07:59) Oct 5 05:04:40 localhost podman[107733]: 2025-10-05 09:04:40.588756396 +0000 UTC m=+0.261496864 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1) Oct 5 05:04:40 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:04:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63599 DF PROTO=TCP SPT=36286 DPT=9101 SEQ=3266319749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A35B70000000001030307) Oct 5 05:04:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10676 DF PROTO=TCP SPT=57230 DPT=9105 SEQ=2886337764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A4BAA0000000001030307) Oct 5 05:04:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10677 DF PROTO=TCP SPT=57230 DPT=9105 SEQ=2886337764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A4FB60000000001030307) Oct 5 05:04:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=10678 DF PROTO=TCP SPT=57230 DPT=9105 SEQ=2886337764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A57B60000000001030307) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:04:51 localhost podman[107762]: Error: container 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 is not running Oct 5 05:04:51 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Main process exited, code=exited, status=125/n/a Oct 5 05:04:51 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Failed with result 'exit-code'. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 05:04:51 localhost podman[107774]: 2025-10-05 09:04:51.764743528 +0000 UTC m=+0.092942513 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64) Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. 
Oct 5 05:04:51 localhost podman[107774]: 2025-10-05 09:04:51.788129479 +0000 UTC m=+0.116328494 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53) Oct 5 05:04:51 localhost podman[107774]: unhealthy Oct 5 05:04:51 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:51 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:04:51 localhost podman[107775]: 2025-10-05 09:04:51.813734929 +0000 UTC m=+0.137702658 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9) Oct 5 05:04:51 localhost podman[107782]: 2025-10-05 09:04:51.868114926 +0000 
UTC m=+0.183692485 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-07-21T13:04:03, io.openshift.expose-services=, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, vendor=Red Hat, Inc., distribution-scope=public, release=2, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:04:51 localhost podman[107824]: 2025-10-05 09:04:51.925935788 +0000 UTC m=+0.129339889 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15) Oct 5 05:04:51 localhost podman[107775]: 2025-10-05 09:04:51.936874877 +0000 UTC m=+0.260842606 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container) Oct 5 05:04:51 localhost podman[107775]: unhealthy Oct 5 05:04:51 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:51 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. 
Oct 5 05:04:51 localhost podman[107824]: 2025-10-05 09:04:51.971317699 +0000 UTC m=+0.174721790 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git) Oct 5 05:04:51 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 05:04:52 localhost podman[107782]: 2025-10-05 09:04:52.005200206 +0000 UTC m=+0.320777745 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=2, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03) Oct 5 05:04:52 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 05:04:52 localhost podman[107823]: 2025-10-05 09:04:52.048935812 +0000 UTC m=+0.256319252 container health_status aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, 
vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, release=1) Oct 5 05:04:52 localhost podman[107776]: 2025-10-05 09:04:52.023725493 +0000 UTC m=+0.345249395 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4) Oct 5 05:04:52 localhost podman[107823]: 2025-10-05 09:04:52.10222039 +0000 UTC m=+0.309603820 container exec_died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:29:47, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, distribution-scope=public, vcs-type=git, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, architecture=x86_64, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Oct 5 05:04:52 localhost podman[107823]: unhealthy Oct 5 05:04:52 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:04:52 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'. 
Oct 5 05:04:52 localhost podman[107776]: 2025-10-05 09:04:52.157448631 +0000 UTC m=+0.478972523 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, container_name=logrotate_crond, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.9, vcs-type=git) Oct 5 05:04:52 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 05:04:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:04:52 localhost podman[107897]: 2025-10-05 09:04:52.271217402 +0000 UTC m=+0.075172737 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Oct 5 05:04:52 localhost podman[107897]: 2025-10-05 09:04:52.676114248 +0000 UTC m=+0.480069563 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Oct 5 05:04:52 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 05:04:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10679 DF PROTO=TCP SPT=57230 DPT=9105 SEQ=2886337764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A67760000000001030307) Oct 5 05:04:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48219 DF PROTO=TCP SPT=32862 DPT=9882 SEQ=1308397419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A6FEC0000000001030307) Oct 5 05:04:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42570 DF PROTO=TCP SPT=54970 DPT=9100 SEQ=3066607644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A77F70000000001030307) Oct 5 05:04:58 localhost podman[107457]: time="2025-10-05T09:04:58Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Oct 5 05:04:58 localhost systemd[1]: tmp-crun.0kJpu6.mount: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: libpod-528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.scope: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: libpod-528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.scope: Consumed 5.388s CPU time. 
Oct 5 05:04:58 localhost podman[107457]: 2025-10-05 09:04:58.391715806 +0000 UTC m=+42.086791101 container stop 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-07-21T14:45:33, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1) Oct 5 05:04:58 localhost podman[107457]: 2025-10-05 09:04:58.425980083 +0000 UTC m=+42.121055408 container died 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, container_name=ceilometer_agent_compute, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4) Oct 5 05:04:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.timer: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948. Oct 5 05:04:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Failed to open /run/systemd/transient/528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: No such file or directory Oct 5 05:04:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:04:58 localhost podman[107457]: 2025-10-05 09:04:58.477256006 +0000 UTC m=+42.172331291 container cleanup 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, 
com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T14:45:33, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1) Oct 5 05:04:58 localhost podman[107457]: ceilometer_agent_compute Oct 5 05:04:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.timer: Failed to open /run/systemd/transient/528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.timer: No such file or directory Oct 5 05:04:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Failed to open /run/systemd/transient/528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: No such file or directory Oct 5 05:04:58 localhost podman[107922]: 2025-10-05 09:04:58.530372819 +0000 UTC m=+0.119947092 container cleanup 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, release=1, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
build-date=2025-07-21T14:45:33, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1) Oct 5 05:04:58 localhost systemd[1]: libpod-conmon-528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.scope: Deactivated successfully. 
Oct 5 05:04:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.timer: Failed to open /run/systemd/transient/528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.timer: No such file or directory Oct 5 05:04:58 localhost systemd[1]: 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: Failed to open /run/systemd/transient/528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948.service: No such file or directory Oct 5 05:04:58 localhost podman[107936]: 2025-10-05 09:04:58.617600905 +0000 UTC m=+0.060131896 container cleanup 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-07-21T14:45:33) Oct 5 05:04:58 localhost podman[107936]: ceilometer_agent_compute Oct 5 05:04:58 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Oct 5 05:04:58 localhost systemd[1]: Stopped ceilometer_agent_compute container. Oct 5 05:04:58 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.045s CPU time, no IO. Oct 5 05:04:59 localhost python3.9[108040]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:04:59 localhost systemd[1]: Reloading. Oct 5 05:04:59 localhost systemd-sysv-generator[108069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:04:59 localhost systemd-rc-local-generator[108064]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:04:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:04:59 localhost systemd[1]: var-lib-containers-storage-overlay-9bbb5021f030002945ce2fa60285c1586ad519c4dce8fbf294a1ab2597d3a339-merged.mount: Deactivated successfully. Oct 5 05:04:59 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Oct 5 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 05:05:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:05:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:05:00 localhost systemd[1]: tmp-crun.cqYFl0.mount: Deactivated successfully. 
Oct 5 05:05:00 localhost podman[108096]: 2025-10-05 09:05:00.917031562 +0000 UTC m=+0.085637714 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, architecture=x86_64, config_id=tripleo_step5) Oct 5 05:05:00 localhost podman[108096]: 2025-10-05 09:05:00.939345751 +0000 UTC m=+0.107951893 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1) Oct 5 05:05:00 
localhost podman[108096]: unhealthy Oct 5 05:05:00 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:00 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:05:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:3f:b5:ce MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=50498 SEQ=25424623 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Oct 5 05:05:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50681 DF PROTO=TCP SPT=56084 DPT=9101 SEQ=3775655337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74A8F330000000001030307) Oct 5 05:05:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:05:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5661 writes, 24K keys, 5661 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5661 writes, 723 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8 writes, 16 keys, 8 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 8 writes, 4 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:05:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50683 DF PROTO=TCP SPT=56084 DPT=9101 SEQ=3775655337 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080AC74A9B360000000001030307) Oct 5 05:05:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. Oct 5 05:05:10 localhost podman[108118]: 2025-10-05 09:05:10.936687086 +0000 UTC m=+0.104472068 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Oct 5 05:05:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50684 DF PROTO=TCP SPT=56084 DPT=9101 SEQ=3775655337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74AAAF70000000001030307) Oct 5 05:05:11 localhost podman[108118]: 2025-10-05 09:05:11.13315977 +0000 UTC m=+0.300944792 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=metrics_qdr, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, build-date=2025-07-21T13:07:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12) Oct 5 05:05:11 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. 
Oct 5 05:05:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45504 DF PROTO=TCP SPT=52946 DPT=9105 SEQ=2368419861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74AC0DA0000000001030307) Oct 5 05:05:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45505 DF PROTO=TCP SPT=52946 DPT=9105 SEQ=2368419861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74AC4F70000000001030307) Oct 5 05:05:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45506 DF PROTO=TCP SPT=52946 DPT=9105 SEQ=2368419861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74ACCF60000000001030307) Oct 5 05:05:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 05:05:21 localhost podman[108148]: 2025-10-05 09:05:21.909086702 +0000 UTC m=+0.076646807 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:05:21 localhost podman[108148]: 2025-10-05 09:05:21.928133983 +0000 UTC m=+0.095694108 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, release=1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:05:21 localhost podman[108148]: unhealthy Oct 5 05:05:21 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:21 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. Oct 5 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a. Oct 5 05:05:22 localhost podman[108183]: 2025-10-05 09:05:22.928949648 +0000 UTC m=+0.078392865 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) 
Oct 5 05:05:22 localhost podman[108183]: 2025-10-05 09:05:22.942185421 +0000 UTC m=+0.091628628 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, vcs-type=git, name=rhosp17/openstack-cron, container_name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-cron-container, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:05:22 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. Oct 5 05:05:22 localhost podman[108173]: 2025-10-05 09:05:22.912993752 +0000 UTC m=+0.071492386 container health_status 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T13:04:03, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=2, version=17.1.9, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Oct 5 05:05:22 localhost podman[108173]: 2025-10-05 09:05:22.993186075 +0000 UTC m=+0.151684749 container exec_died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, build-date=2025-07-21T13:04:03, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20250721.1, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=2, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-collectd-container) Oct 5 05:05:23 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Deactivated successfully. 
Oct 5 05:05:23 localhost podman[108170]: 2025-10-05 09:05:23.080500864 +0000 UTC m=+0.242452283 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, container_name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.9, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=) Oct 5 05:05:23 localhost podman[108192]: Error: container aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a is not running Oct 5 05:05:23 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Main process exited, code=exited, status=125/n/a Oct 5 05:05:23 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed with result 'exit-code'. Oct 5 05:05:23 localhost podman[108168]: 2025-10-05 09:05:23.180343845 +0000 UTC m=+0.344823773 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 05:05:23 localhost podman[108168]: 2025-10-05 09:05:23.199198771 +0000 UTC m=+0.363678759 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:28:44, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12) Oct 5 05:05:23 localhost podman[108168]: unhealthy Oct 5 05:05:23 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:23 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:05:23 localhost podman[108169]: 2025-10-05 09:05:23.288904765 +0000 UTC m=+0.451559244 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid) Oct 5 05:05:23 localhost podman[108169]: 2025-10-05 09:05:23.297478289 +0000 UTC m=+0.460132768 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, 
com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1) Oct 5 05:05:23 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. 
Oct 5 05:05:23 localhost podman[108170]: 2025-10-05 09:05:23.432901463 +0000 UTC m=+0.594852932 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:05:23 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:05:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45507 DF PROTO=TCP SPT=52946 DPT=9105 SEQ=2368419861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74ADCB60000000001030307) Oct 5 05:05:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52645 DF PROTO=TCP SPT=35632 DPT=9882 SEQ=1925470920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74AE51C0000000001030307) Oct 5 05:05:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37046 DF PROTO=TCP SPT=50068 DPT=9100 SEQ=170514628 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74AED360000000001030307) Oct 5 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 05:05:31 localhost podman[108281]: 2025-10-05 09:05:31.163442865 +0000 UTC m=+0.082147988 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git) Oct 5 05:05:31 localhost podman[108281]: 2025-10-05 09:05:31.208259351 +0000 UTC m=+0.126964494 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d) Oct 5 05:05:31 localhost podman[108281]: unhealthy Oct 5 05:05:31 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:31 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:05:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41701 DF PROTO=TCP SPT=45232 DPT=9102 SEQ=2057751126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74AFC760000000001030307) Oct 5 05:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2114 DF PROTO=TCP SPT=34652 DPT=9101 SEQ=636618593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B04630000000001030307) Oct 5 05:05:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2116 DF PROTO=TCP SPT=34652 DPT=9101 SEQ=636618593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B10760000000001030307) Oct 5 05:05:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2117 DF PROTO=TCP SPT=34652 DPT=9101 SEQ=636618593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B20360000000001030307) Oct 5 05:05:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 05:05:41 localhost podman[108379]: 2025-10-05 09:05:41.400867507 +0000 UTC m=+0.073463690 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, 
container_name=metrics_qdr, io.buildah.version=1.33.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, managed_by=tripleo_ansible) Oct 5 05:05:41 localhost podman[108379]: 2025-10-05 09:05:41.580819979 +0000 UTC m=+0.253416172 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1) Oct 5 05:05:41 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:05:41 localhost podman[108081]: time="2025-10-05T09:05:41Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Oct 5 05:05:41 localhost systemd[1]: libpod-aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.scope: Deactivated successfully. Oct 5 05:05:41 localhost systemd[1]: libpod-aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.scope: Consumed 6.122s CPU time. 
Oct 5 05:05:41 localhost podman[108081]: 2025-10-05 09:05:41.832424941 +0000 UTC m=+42.104924758 container died aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:29:47, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 5 05:05:41 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.timer: Deactivated successfully.
Oct 5 05:05:41 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.
Oct 5 05:05:41 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed to open /run/systemd/transient/aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: No such file or directory
Oct 5 05:05:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a-userdata-shm.mount: Deactivated successfully.
Oct 5 05:05:41 localhost podman[108081]: 2025-10-05 09:05:41.882345867 +0000 UTC m=+42.154845674 container cleanup aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, vendor=Red Hat, Inc., release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20250721.1, com.redhat.component=openstack-ceilometer-ipmi-container)
Oct 5 05:05:41 localhost podman[108081]: ceilometer_agent_ipmi
Oct 5 05:05:41 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.timer: Failed to open /run/systemd/transient/aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.timer: No such file or directory
Oct 5 05:05:41 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed to open /run/systemd/transient/aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: No such file or directory
Oct 5 05:05:41 localhost podman[108408]: 2025-10-05 09:05:41.909477628 +0000 UTC m=+0.068058632 container cleanup aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, build-date=2025-07-21T15:29:47, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git)
Oct 5 05:05:41 localhost systemd[1]: libpod-conmon-aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.scope: Deactivated successfully.
Oct 5 05:05:42 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.timer: Failed to open /run/systemd/transient/aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.timer: No such file or directory
Oct 5 05:05:42 localhost systemd[1]: aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: Failed to open /run/systemd/transient/aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a.service: No such file or directory
Oct 5 05:05:42 localhost podman[108421]: 2025-10-05 09:05:42.00565622 +0000 UTC m=+0.069238415 container cleanup aa566294618f4eac359232895eb54d06a65bac05daa709687a0b1b83c12ca19a (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-ipmi/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:29:47, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, tcib_managed=true, vcs-ref=fb6ae8bb9cf127a94f881f2787c60d4d2018020f, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Oct 5 05:05:42 localhost podman[108421]: ceilometer_agent_ipmi
Oct 5 05:05:42 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Oct 5 05:05:42 localhost systemd[1]: Stopped ceilometer_agent_ipmi container.
Oct 5 05:05:42 localhost systemd[1]: var-lib-containers-storage-overlay-bdd5d7f208e627ed078801541a11c92d30dfbffb1c7200a7e88292fbfc56b82d-merged.mount: Deactivated successfully.
Oct 5 05:05:42 localhost python3.9[108523]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:05:42 localhost systemd[1]: Reloading.
Oct 5 05:05:42 localhost systemd-rc-local-generator[108551]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:05:42 localhost systemd-sysv-generator[108555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:05:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:05:43 localhost systemd[1]: Stopping collectd container...
Oct 5 05:05:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50272 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=1198505453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B360B0000000001030307)
Oct 5 05:05:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50273 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=1198505453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B39F70000000001030307)
Oct 5 05:05:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 5 05:05:48 localhost recover_tripleo_nova_virtqemud[108577]: 63458
Oct 5 05:05:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 5 05:05:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 5 05:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50274 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=1198505453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B41F60000000001030307)
Oct 5 05:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.
Oct 5 05:05:52 localhost podman[108578]: 2025-10-05 09:05:52.149203934 +0000 UTC m=+0.068262269 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12)
Oct 5 05:05:52 localhost podman[108578]: 2025-10-05 09:05:52.195191532 +0000 UTC m=+0.114249897 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1)
Oct 5 05:05:52 localhost podman[108578]: unhealthy
Oct 5 05:05:52 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:05:52 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'.
Oct 5 05:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50275 DF PROTO=TCP SPT=59964 DPT=9105 SEQ=1198505453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B51B60000000001030307)
Oct 5 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.
Oct 5 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.
Oct 5 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.
Oct 5 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.
Oct 5 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.
Oct 5 05:05:53 localhost systemd[1]: tmp-crun.1m8kSY.mount: Deactivated successfully.
Oct 5 05:05:53 localhost systemd[1]: tmp-crun.Uj46tM.mount: Deactivated successfully.
Oct 5 05:05:53 localhost podman[108604]: Error: container 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 is not running
Oct 5 05:05:53 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Main process exited, code=exited, status=125/n/a
Oct 5 05:05:53 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Failed with result 'exit-code'.
Oct 5 05:05:54 localhost podman[108596]: 2025-10-05 09:05:53.926090288 +0000 UTC m=+0.090511997 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 5 05:05:54 localhost podman[108597]: 2025-10-05 09:05:54.031219503 +0000 UTC m=+0.191514879 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9)
Oct 5 05:05:54 localhost podman[108597]: 2025-10-05 09:05:54.040269441 +0000 UTC m=+0.200564807 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.9, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 5 05:05:54 localhost podman[108611]: 2025-10-05 09:05:53.949077616 +0000 UTC m=+0.095955235 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 5 05:05:54 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully.
Oct 5 05:05:54 localhost podman[108598]: 2025-10-05 09:05:54.009253833 +0000 UTC m=+0.161557811 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1)
Oct 5 05:05:54 localhost podman[108596]: 2025-10-05 09:05:54.060268788 +0000 UTC m=+0.224690547 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 5 05:05:54 localhost podman[108611]: 2025-10-05 09:05:54.083234486 +0000 UTC m=+0.230112095 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, container_name=logrotate_crond)
Oct 5 05:05:54 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully.
Oct 5 05:05:54 localhost podman[108596]: unhealthy Oct 5 05:05:54 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:05:54 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:05:54 localhost podman[108598]: 2025-10-05 09:05:54.376283892 +0000 UTC m=+0.528587880 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:05:54 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. Oct 5 05:05:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27856 DF PROTO=TCP SPT=33798 DPT=9882 SEQ=3733757040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B5A4B0000000001030307) Oct 5 05:05:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35609 DF PROTO=TCP SPT=59756 DPT=9100 SEQ=3657276153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B62760000000001030307) Oct 5 05:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. 
Oct 5 05:06:01 localhost podman[108691]: 2025-10-05 09:06:01.664624358 +0000 UTC m=+0.086491107 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1) Oct 5 05:06:01 localhost podman[108691]: 2025-10-05 09:06:01.682551899 +0000 UTC m=+0.104418678 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1) Oct 5 
05:06:01 localhost podman[108691]: unhealthy Oct 5 05:06:01 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:01 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:06:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17800 DF PROTO=TCP SPT=56404 DPT=9102 SEQ=1449756881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B71760000000001030307) Oct 5 05:06:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46765 DF PROTO=TCP SPT=36888 DPT=9101 SEQ=3252677142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B79930000000001030307) Oct 5 05:06:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46767 DF PROTO=TCP SPT=36888 DPT=9101 SEQ=3252677142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B85B60000000001030307) Oct 5 05:06:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46768 DF PROTO=TCP SPT=36888 DPT=9101 SEQ=3252677142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74B95760000000001030307) Oct 5 05:06:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c. 
Oct 5 05:06:11 localhost podman[108714]: 2025-10-05 09:06:11.924499596 +0000 UTC m=+0.093827408 container health_status 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:59, release=1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.33.12, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20250721.1, distribution-scope=public) Oct 5 05:06:12 localhost podman[108714]: 2025-10-05 09:06:12.151190107 +0000 UTC m=+0.320517839 container exec_died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, batch=17.1_20250721.1, version=17.1.9, container_name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1, tcib_managed=true, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Oct 5 05:06:12 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Deactivated successfully. Oct 5 05:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43848 DF PROTO=TCP SPT=57776 DPT=9105 SEQ=4003314283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BAB3A0000000001030307) Oct 5 05:06:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43849 DF PROTO=TCP SPT=57776 DPT=9105 SEQ=4003314283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BAF360000000001030307) Oct 5 05:06:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43850 DF PROTO=TCP SPT=57776 DPT=9105 SEQ=4003314283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BB7360000000001030307) Oct 5 05:06:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 05:06:22 localhost podman[108744]: 2025-10-05 09:06:22.656198979 +0000 UTC m=+0.079193428 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git) Oct 5 05:06:22 localhost podman[108744]: 2025-10-05 09:06:22.699220205 +0000 UTC m=+0.122214674 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Oct 5 05:06:22 localhost podman[108744]: unhealthy Oct 5 05:06:22 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:22 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:06:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43851 DF PROTO=TCP SPT=57776 DPT=9105 SEQ=4003314283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BC6F60000000001030307) Oct 5 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. Oct 5 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0. 
Oct 5 05:06:24 localhost podman[108764]: 2025-10-05 09:06:24.92182214 +0000 UTC m=+0.085850809 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public) Oct 5 05:06:24 localhost podman[108764]: 2025-10-05 09:06:24.961553987 +0000 
UTC m=+0.125582646 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, architecture=x86_64, container_name=ovn_controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44) Oct 5 05:06:24 localhost podman[108764]: unhealthy Oct 5 05:06:24 localhost systemd[1]: tmp-crun.Vk966A.mount: Deactivated successfully. 
Oct 5 05:06:24 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:24 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:06:24 localhost podman[108765]: 2025-10-05 09:06:24.982247563 +0000 UTC m=+0.141709288 container health_status 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team) Oct 5 05:06:25 localhost podman[108765]: 2025-10-05 09:06:25.015464722 +0000 UTC m=+0.174926417 container exec_died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, architecture=x86_64, com.redhat.component=openstack-iscsid-container) Oct 5 05:06:25 localhost systemd[1]: tmp-crun.ZHevVf.mount: Deactivated successfully. 
Oct 5 05:06:25 localhost podman[108766]: 2025-10-05 09:06:25.031103649 +0000 UTC m=+0.189377701 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, release=1) Oct 5 05:06:25 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Deactivated successfully. Oct 5 05:06:25 localhost podman[108767]: Error: container 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 is not running Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Main process exited, code=exited, status=125/n/a Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Failed with result 'exit-code'. Oct 5 05:06:25 localhost podman[108773]: 2025-10-05 09:06:25.142440875 +0000 UTC m=+0.292457871 container health_status 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, release=1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron) Oct 5 05:06:25 localhost podman[108773]: 2025-10-05 09:06:25.179183819 +0000 UTC m=+0.329200815 container exec_died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Oct 5 05:06:25 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Deactivated successfully. 
Oct 5 05:06:25 localhost podman[108564]: time="2025-10-05T09:06:25Z" level=warning msg="StopSignal SIGTERM failed to stop container collectd in 42 seconds, resorting to SIGKILL" Oct 5 05:06:25 localhost podman[108564]: 2025-10-05 09:06:25.322964143 +0000 UTC m=+42.073902003 container stop 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, architecture=x86_64, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, 
io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T13:04:03, container_name=collectd, distribution-scope=public, batch=17.1_20250721.1, release=2, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.9, tcib_managed=true, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, summary=Red Hat OpenStack Platform 17.1 collectd) Oct 5 05:06:25 localhost systemd[1]: libpod-9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.scope: Deactivated successfully. Oct 5 05:06:25 localhost systemd[1]: libpod-9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.scope: Consumed 1.836s CPU time. Oct 5 05:06:25 localhost podman[108564]: 2025-10-05 09:06:25.32982949 +0000 UTC m=+42.080767360 container died 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, build-date=2025-07-21T13:04:03, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-collectd, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9) Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.timer: Deactivated successfully. Oct 5 05:06:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9. 
Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Failed to open /run/systemd/transient/9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: No such file or directory Oct 5 05:06:25 localhost podman[108564]: 2025-10-05 09:06:25.375411037 +0000 UTC m=+42.126348857 container cleanup 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-07-21T13:04:03, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, tcib_managed=true, container_name=collectd, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:06:25 localhost podman[108564]: collectd Oct 5 05:06:25 localhost podman[108766]: 2025-10-05 09:06:25.402178489 +0000 UTC m=+0.560452501 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.timer: Failed to open /run/systemd/transient/9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.timer: No such file or directory Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Failed to open /run/systemd/transient/9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: No such file or directory Oct 5 05:06:25 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 05:06:25 localhost podman[108856]: 2025-10-05 09:06:25.417040996 +0000 UTC m=+0.076195565 container cleanup 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-07-21T13:04:03, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, release=2, batch=17.1_20250721.1, com.redhat.component=openstack-collectd-container, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, vendor=Red Hat, Inc.) Oct 5 05:06:25 localhost systemd[1]: libpod-conmon-9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.scope: Deactivated successfully. Oct 5 05:06:25 localhost podman[108890]: error opening file `/run/crun/9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9/status`: No such file or directory Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.timer: Failed to open /run/systemd/transient/9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.timer: No such file or directory Oct 5 05:06:25 localhost systemd[1]: 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: Failed to open /run/systemd/transient/9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9.service: No such file or directory Oct 5 05:06:25 localhost podman[108877]: 2025-10-05 09:06:25.512795635 +0000 UTC m=+0.070094469 container cleanup 9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2025-07-21T13:04:03, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-collectd/images/17.1.9-2, distribution-scope=public, release=2, vcs-ref=1c67cc222531545f43af554407dce9103c5ddf0b) Oct 5 05:06:25 localhost podman[108877]: collectd Oct 5 05:06:25 localhost systemd[1]: 
tripleo_collectd.service: Deactivated successfully. Oct 5 05:06:25 localhost systemd[1]: Stopped collectd container. Oct 5 05:06:25 localhost systemd[1]: var-lib-containers-storage-overlay-d2231e879ead43b6a2e73a2aad2fe770af49563937e9adad8ccf7c304d6ac6ec-merged.mount: Deactivated successfully. Oct 5 05:06:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9385f7bd9d8c7b0995c349863b61ca1fc59e4816c9f0917726b8c745f205a2d9-userdata-shm.mount: Deactivated successfully. Oct 5 05:06:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13012 DF PROTO=TCP SPT=39416 DPT=9882 SEQ=2710944945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BCF7B0000000001030307) Oct 5 05:06:26 localhost python3.9[108983]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:06:26 localhost systemd[1]: Reloading. Oct 5 05:06:26 localhost systemd-rc-local-generator[109007]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:06:26 localhost systemd-sysv-generator[109011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:06:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:06:26 localhost systemd[1]: Stopping iscsid container... Oct 5 05:06:26 localhost systemd[1]: libpod-6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.scope: Deactivated successfully. 
Oct 5 05:06:26 localhost systemd[1]: libpod-6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.scope: Consumed 1.013s CPU time. Oct 5 05:06:26 localhost podman[109024]: 2025-10-05 09:06:26.759037563 +0000 UTC m=+0.078237931 container died 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, config_id=tripleo_step3, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Oct 5 05:06:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.timer: Deactivated successfully. Oct 5 05:06:26 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097. Oct 5 05:06:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Failed to open /run/systemd/transient/6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: No such file or directory Oct 5 05:06:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:06:26 localhost podman[109024]: 2025-10-05 09:06:26.809443052 +0000 UTC m=+0.128643410 container cleanup 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 5 05:06:26 localhost podman[109024]: iscsid
Oct 5 05:06:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.timer: Failed to open /run/systemd/transient/6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.timer: No such file or directory
Oct 5 05:06:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Failed to open /run/systemd/transient/6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: No such file or directory
Oct 5 05:06:26 localhost podman[109036]: 2025-10-05 09:06:26.839965186 +0000 UTC m=+0.071335341 container cleanup 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Oct 5 05:06:26 localhost systemd[1]: libpod-conmon-6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.scope: Deactivated successfully.
Oct 5 05:06:26 localhost systemd[1]: var-lib-containers-storage-overlay-99b34dfa0926eebd9754e1c29502e939f5774c51688baaa6ab9821bcca9cd3b2-merged.mount: Deactivated successfully.
Oct 5 05:06:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.timer: Failed to open /run/systemd/transient/6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.timer: No such file or directory
Oct 5 05:06:26 localhost systemd[1]: 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: Failed to open /run/systemd/transient/6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097.service: No such file or directory
Oct 5 05:06:26 localhost podman[109048]: 2025-10-05 09:06:26.939228912 +0000 UTC m=+0.067864408 container cleanup 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 5 05:06:26 localhost podman[109048]: iscsid
Oct 5 05:06:26 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Oct 5 05:06:26 localhost systemd[1]: Stopped iscsid container.
Oct 5 05:06:27 localhost python3.9[109152]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:06:27 localhost systemd[1]: Reloading.
Oct 5 05:06:27 localhost systemd-sysv-generator[109180]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:06:27 localhost systemd-rc-local-generator[109176]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:06:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:06:28 localhost systemd[1]: Stopping logrotate_crond container...
Oct 5 05:06:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54333 DF PROTO=TCP SPT=55732 DPT=9100 SEQ=1158104884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BD7B60000000001030307)
Oct 5 05:06:28 localhost systemd[1]: libpod-93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.scope: Deactivated successfully.
Oct 5 05:06:28 localhost podman[109192]: 2025-10-05 09:06:28.205662893 +0000 UTC m=+0.079510937 container died 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, vendor=Red Hat, Inc.)
Oct 5 05:06:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.timer: Deactivated successfully.
Oct 5 05:06:28 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.
Oct 5 05:06:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Failed to open /run/systemd/transient/93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: No such file or directory
Oct 5 05:06:28 localhost systemd[1]: tmp-crun.QCrd6m.mount: Deactivated successfully.
Oct 5 05:06:28 localhost podman[109192]: 2025-10-05 09:06:28.264166062 +0000 UTC m=+0.138014106 container cleanup 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, distribution-scope=public)
Oct 5 05:06:28 localhost podman[109192]: logrotate_crond
Oct 5 05:06:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.timer: Failed to open /run/systemd/transient/93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.timer: No such file or directory
Oct 5 05:06:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Failed to open /run/systemd/transient/93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: No such file or directory
Oct 5 05:06:28 localhost podman[109207]: 2025-10-05 09:06:28.301842653 +0000 UTC m=+0.084299357 container cleanup 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 5 05:06:28 localhost systemd[1]: libpod-conmon-93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.scope: Deactivated successfully.
Oct 5 05:06:28 localhost podman[109236]: error opening file `/run/crun/93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0/status`: No such file or directory
Oct 5 05:06:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.timer: Failed to open /run/systemd/transient/93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.timer: No such file or directory
Oct 5 05:06:28 localhost systemd[1]: 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: Failed to open /run/systemd/transient/93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0.service: No such file or directory
Oct 5 05:06:28 localhost podman[109224]: 2025-10-05 09:06:28.405726515 +0000 UTC m=+0.074616313 container cleanup 93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, container_name=logrotate_crond, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container)
Oct 5 05:06:28 localhost podman[109224]: logrotate_crond
Oct 5 05:06:28 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Oct 5 05:06:28 localhost systemd[1]: Stopped logrotate_crond container.
Oct 5 05:06:29 localhost python3.9[109330]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:06:29 localhost systemd[1]: var-lib-containers-storage-overlay-f55b66b4cc27e216ee661f88e3740f080132b0ec881f50e70b03e2853c0d8b80-merged.mount: Deactivated successfully.
Oct 5 05:06:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0-userdata-shm.mount: Deactivated successfully.
Oct 5 05:06:30 localhost systemd[1]: Reloading.
Oct 5 05:06:30 localhost systemd-sysv-generator[109359]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:06:30 localhost systemd-rc-local-generator[109355]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:06:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:06:30 localhost systemd[1]: Stopping metrics_qdr container...
Oct 5 05:06:30 localhost kernel: qdrouterd[56042]: segfault at 0 ip 00007f64db2fb7cb sp 00007fff21298820 error 4 in libc.so.6[7f64db298000+175000]
Oct 5 05:06:30 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Oct 5 05:06:30 localhost systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 5 05:06:30 localhost systemd[1]: Started Process Core Dump (PID 109385/UID 0).
Oct 5 05:06:30 localhost systemd-coredump[109386]: Resource limits disable core dumping for process 56042 (qdrouterd).
Oct 5 05:06:30 localhost systemd-coredump[109386]: Process 56042 (qdrouterd) of user 42465 dumped core.
Oct 5 05:06:30 localhost systemd[1]: systemd-coredump@0-109385-0.service: Deactivated successfully.
Oct 5 05:06:30 localhost podman[109371]: 2025-10-05 09:06:30.790161885 +0000 UTC m=+0.214936990 container died 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:59, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, config_id=tripleo_step1)
Oct 5 05:06:30 localhost systemd[1]: libpod-9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.scope: Deactivated successfully.
Oct 5 05:06:30 localhost systemd[1]: libpod-9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.scope: Consumed 28.944s CPU time.
Oct 5 05:06:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.timer: Deactivated successfully.
Oct 5 05:06:30 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.
Oct 5 05:06:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Failed to open /run/systemd/transient/9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: No such file or directory
Oct 5 05:06:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c-userdata-shm.mount: Deactivated successfully.
Oct 5 05:06:30 localhost podman[109371]: 2025-10-05 09:06:30.8316281 +0000 UTC m=+0.256403165 container cleanup 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1, tcib_managed=true, build-date=2025-07-21T13:07:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container)
Oct 5 05:06:30 localhost podman[109371]: metrics_qdr
Oct 5 05:06:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.timer: Failed to open /run/systemd/transient/9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.timer: No such file or directory
Oct 5 05:06:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Failed to open /run/systemd/transient/9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: No such file or directory
Oct 5 05:06:30 localhost podman[109390]: 2025-10-05 09:06:30.868407445 +0000 UTC m=+0.065339627 container cleanup 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20250721.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.9, name=rhosp17/openstack-qdrouterd, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed, config_id=tripleo_step1)
Oct 5 05:06:30 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Oct 5 05:06:30 localhost systemd[1]: libpod-conmon-9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.scope: Deactivated successfully.
Oct 5 05:06:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.timer: Failed to open /run/systemd/transient/9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.timer: No such file or directory
Oct 5 05:06:30 localhost systemd[1]: 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: Failed to open /run/systemd/transient/9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c.service: No such file or directory
Oct 5 05:06:30 localhost podman[109401]: 2025-10-05 09:06:30.963492276 +0000 UTC m=+0.066785968 container cleanup 9d0cf09c047ee1ac61282dd8e25c18e0b094b1f505a3d46dec618b474c84754c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-07-21T13:07:59, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '10ed3ae740a3c584de5be73e09f3fdc3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, release=1, com.redhat.component=openstack-qdrouterd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-qdrouterd/images/17.1.9-1, batch=17.1_20250721.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4a9cf7084a7631a8cf28014f76f8f9d6da5b1fed)
Oct 5 05:06:30 localhost podman[109401]: metrics_qdr
Oct 5 05:06:30 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Oct 5 05:06:30 localhost systemd[1]: Stopped metrics_qdr container.
Oct 5 05:06:31 localhost systemd[1]: var-lib-containers-storage-overlay-92c9c6b2f01f047207aca223ed13c75d75c3b5dfe8b2b9d0938721ee5dd381ac-merged.mount: Deactivated successfully.
Oct 5 05:06:31 localhost python3.9[109505]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:06:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.
Oct 5 05:06:31 localhost podman[109544]: 2025-10-05 09:06:31.91999426 +0000 UTC m=+0.077367098 container health_status 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Oct 5 05:06:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42222 DF PROTO=TCP SPT=51992 DPT=9102 SEQ=542589826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BE6B60000000001030307) Oct 5 05:06:31 localhost podman[109544]: 2025-10-05 09:06:31.972228958 +0000 UTC m=+0.129601786 container exec_died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 05:06:31 localhost podman[109544]: unhealthy Oct 5 05:06:31 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:31 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. Oct 5 05:06:32 localhost python3.9[109620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:06:33 localhost python3.9[109713]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:06:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45907 DF PROTO=TCP SPT=51284 DPT=9101 SEQ=2721085463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BEEC30000000001030307) Oct 5 05:06:34 localhost python3.9[109806]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:06:34 localhost systemd[1]: Reloading. Oct 5 05:06:35 localhost systemd-rc-local-generator[109832]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:06:35 localhost systemd-sysv-generator[109836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:06:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:06:35 localhost systemd[1]: Stopping nova_compute container... Oct 5 05:06:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45909 DF PROTO=TCP SPT=51284 DPT=9101 SEQ=2721085463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74BFAB70000000001030307) Oct 5 05:06:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45910 DF PROTO=TCP SPT=51284 DPT=9101 SEQ=2721085463 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C0A760000000001030307) Oct 5 05:06:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20235 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=680244726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C206A0000000001030307) Oct 5 05:06:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20236 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=680244726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C24760000000001030307) Oct 5 05:06:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20237 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=680244726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C2C760000000001030307) Oct 5 05:06:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:06:52 localhost podman[109937]: 2025-10-05 09:06:52.909002607 +0000 UTC m=+0.076904815 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1, vcs-type=git, version=17.1.9, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 05:06:52 localhost podman[109937]: 2025-10-05 09:06:52.928177182 +0000 UTC m=+0.096079370 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, container_name=ovn_metadata_agent, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1) Oct 5 05:06:52 localhost podman[109937]: unhealthy Oct 5 05:06:52 localhost systemd[1]: 
1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:52 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:06:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20238 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=680244726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C3C360000000001030307) Oct 5 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:06:55 localhost podman[109957]: 2025-10-05 09:06:55.16298353 +0000 UTC m=+0.079299890 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.9, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1) Oct 5 05:06:55 localhost podman[109957]: 2025-10-05 09:06:55.207324483 +0000 UTC m=+0.123640833 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:06:55 localhost podman[109957]: unhealthy Oct 5 05:06:55 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:06:55 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655. Oct 5 05:06:55 localhost systemd[1]: tmp-crun.dVRyq2.mount: Deactivated successfully. 
Oct 5 05:06:55 localhost podman[109977]: 2025-10-05 09:06:55.910340032 +0000 UTC m=+0.077926312 container health_status 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37) Oct 5 05:06:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52763 DF PROTO=TCP SPT=41166 DPT=9882 SEQ=2664008356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C44AC0000000001030307) Oct 5 05:06:56 localhost podman[109977]: 2025-10-05 09:06:56.302198371 +0000 UTC m=+0.469784611 container exec_died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible) Oct 5 05:06:56 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Deactivated successfully. 
Oct 5 05:06:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43478 DF PROTO=TCP SPT=41944 DPT=9100 SEQ=105700007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C4CB60000000001030307) Oct 5 05:07:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18834 DF PROTO=TCP SPT=58184 DPT=9102 SEQ=751163550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C5BF60000000001030307) Oct 5 05:07:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef. Oct 5 05:07:02 localhost podman[109999]: Error: container 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef is not running Oct 5 05:07:02 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Main process exited, code=exited, status=125/n/a Oct 5 05:07:02 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed with result 'exit-code'. 
Oct 5 05:07:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58505 DF PROTO=TCP SPT=40686 DPT=9101 SEQ=3270371282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C63F30000000001030307) Oct 5 05:07:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58507 DF PROTO=TCP SPT=40686 DPT=9101 SEQ=3270371282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C6FF60000000001030307) Oct 5 05:07:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58508 DF PROTO=TCP SPT=40686 DPT=9101 SEQ=3270371282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C7FB70000000001030307) Oct 5 05:07:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55743 DF PROTO=TCP SPT=36544 DPT=9105 SEQ=1268370098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C959A0000000001030307) Oct 5 05:07:17 localhost podman[109847]: time="2025-10-05T09:07:17Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Oct 5 05:07:17 localhost systemd[1]: libpod-700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.scope: Deactivated successfully. Oct 5 05:07:17 localhost systemd[1]: libpod-700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.scope: Consumed 27.955s CPU time. 
Oct 5 05:07:17 localhost podman[109847]: 2025-10-05 09:07:17.341636131 +0000 UTC m=+42.085510940 container died 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5)
Oct 5 05:07:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.timer: Deactivated successfully.
Oct 5 05:07:17 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.
Oct 5 05:07:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed to open /run/systemd/transient/700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: No such file or directory
Oct 5 05:07:17 localhost systemd[1]: var-lib-containers-storage-overlay-4cbaa25cf1e4bebd8611528fd028e796ee83b34c4bc80959cdc10d28c4b2f1ae-merged.mount: Deactivated successfully.
Oct 5 05:07:17 localhost podman[109847]: 2025-10-05 09:07:17.395529605 +0000 UTC m=+42.139404414 container cleanup 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:07:17 localhost podman[109847]: nova_compute
Oct 5 05:07:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.timer: Failed to open /run/systemd/transient/700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.timer: No such file or directory
Oct 5 05:07:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed to open /run/systemd/transient/700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: No such file or directory
Oct 5 05:07:17 localhost podman[110011]: 2025-10-05 09:07:17.429950087 +0000 UTC m=+0.077916953 container cleanup 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, config_id=tripleo_step5, version=17.1.9, architecture=x86_64, container_name=nova_compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 5 05:07:17 localhost systemd[1]: libpod-conmon-700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.scope: Deactivated successfully.
Oct 5 05:07:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.timer: Failed to open /run/systemd/transient/700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.timer: No such file or directory
Oct 5 05:07:17 localhost systemd[1]: 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: Failed to open /run/systemd/transient/700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef.service: No such file or directory
Oct 5 05:07:17 localhost podman[110024]: 2025-10-05 09:07:17.535942706 +0000 UTC m=+0.069567264 container cleanup 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, version=17.1.9, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 5 05:07:17 localhost podman[110024]: nova_compute
Oct 5 05:07:17 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Oct 5 05:07:17 localhost systemd[1]: Stopped nova_compute container.
Oct 5 05:07:17 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.132s CPU time, no IO.
Oct 5 05:07:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55744 DF PROTO=TCP SPT=36544 DPT=9105 SEQ=1268370098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74C99B60000000001030307)
Oct 5 05:07:18 localhost python3.9[110127]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:07:19 localhost systemd[1]: Reloading.
Oct 5 05:07:19 localhost systemd-sysv-generator[110153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:07:19 localhost systemd-rc-local-generator[110149]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:07:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:07:19 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 5 05:07:19 localhost systemd[1]: Stopping nova_migration_target container...
Oct 5 05:07:19 localhost recover_tripleo_nova_virtqemud[110170]: 63458
Oct 5 05:07:19 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 5 05:07:19 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 5 05:07:19 localhost systemd[1]: libpod-69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.scope: Deactivated successfully.
Oct 5 05:07:19 localhost systemd[1]: libpod-69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.scope: Consumed 34.625s CPU time.
Oct 5 05:07:19 localhost podman[110169]: 2025-10-05 09:07:19.722292849 +0000 UTC m=+0.083226717 container died 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=)
Oct 5 05:07:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.timer: Deactivated successfully.
Oct 5 05:07:19 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.
Oct 5 05:07:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Failed to open /run/systemd/transient/69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: No such file or directory
Oct 5 05:07:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655-userdata-shm.mount: Deactivated successfully.
Oct 5 05:07:19 localhost systemd[1]: var-lib-containers-storage-overlay-e8d5660b8fd17c472ba639c36602afe3ef86a2b23ac8f1b2407f6d07d573e2fc-merged.mount: Deactivated successfully.
Oct 5 05:07:19 localhost podman[110169]: 2025-10-05 09:07:19.78300121 +0000 UTC m=+0.143935118 container cleanup 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 5 05:07:19 localhost podman[110169]: nova_migration_target
Oct 5 05:07:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.timer: Failed to open /run/systemd/transient/69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.timer: No such file or directory
Oct 5 05:07:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Failed to open /run/systemd/transient/69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: No such file or directory
Oct 5 05:07:19 localhost podman[110183]: 2025-10-05 09:07:19.805093904 +0000 UTC m=+0.072582536 container cleanup 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12)
Oct 5 05:07:19 localhost systemd[1]: libpod-conmon-69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.scope: Deactivated successfully.
Oct 5 05:07:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55745 DF PROTO=TCP SPT=36544 DPT=9105 SEQ=1268370098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CA1B60000000001030307)
Oct 5 05:07:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.timer: Failed to open /run/systemd/transient/69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.timer: No such file or directory
Oct 5 05:07:19 localhost systemd[1]: 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: Failed to open /run/systemd/transient/69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655.service: No such file or directory
Oct 5 05:07:19 localhost podman[110195]: 2025-10-05 09:07:19.909546711 +0000 UTC m=+0.072283119 container cleanup 69cccde27e47fa5a02d7f6c04a4b7ead3dd5bcc69d56eedce6e9bb8b14b6a655 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public)
Oct 5 05:07:19 localhost podman[110195]: nova_migration_target
Oct 5 05:07:19 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Oct 5 05:07:19 localhost systemd[1]: Stopped nova_migration_target container.
Oct 5 05:07:20 localhost python3.9[110297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:07:20 localhost systemd[1]: Reloading.
Oct 5 05:07:20 localhost systemd-sysv-generator[110331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:07:20 localhost systemd-rc-local-generator[110327]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:07:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:07:21 localhost systemd[1]: Stopping nova_virtlogd_wrapper container...
Oct 5 05:07:21 localhost systemd[1]: libpod-083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd.scope: Deactivated successfully.
Oct 5 05:07:21 localhost podman[110339]: 2025-10-05 09:07:21.185576095 +0000 UTC m=+0.070261543 container died 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, release=2, container_name=nova_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']})
Oct 5 05:07:21 localhost podman[110339]: 2025-10-05 09:07:21.223770209 +0000 UTC m=+0.108455637 container cleanup 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2)
Oct 5 05:07:21 localhost podman[110339]: nova_virtlogd_wrapper
Oct 5 05:07:21 localhost podman[110351]: 2025-10-05 09:07:21.248957698 +0000 UTC m=+0.058455050 container cleanup 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=2, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 5 05:07:22 localhost systemd[1]:
var-lib-containers-storage-overlay-cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6-merged.mount: Deactivated successfully. Oct 5 05:07:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd-userdata-shm.mount: Deactivated successfully. Oct 5 05:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:07:23 localhost podman[110368]: 2025-10-05 09:07:23.404837947 +0000 UTC m=+0.076767211 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Oct 5 05:07:23 localhost podman[110368]: 2025-10-05 09:07:23.422350436 +0000 UTC m=+0.094279700 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3) Oct 5 05:07:23 localhost podman[110368]: unhealthy Oct 5 05:07:23 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:07:23 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:07:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55746 DF PROTO=TCP SPT=36544 DPT=9105 SEQ=1268370098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CB1760000000001030307) Oct 5 05:07:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. 
Oct 5 05:07:25 localhost podman[110388]: 2025-10-05 09:07:25.6653821 +0000 UTC m=+0.086006304 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Oct 5 05:07:25 localhost podman[110388]: 2025-10-05 09:07:25.708369775 +0000 
UTC m=+0.128993979 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4) Oct 5 05:07:25 localhost podman[110388]: unhealthy Oct 5 05:07:25 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, 
code=exited, status=1/FAILURE Oct 5 05:07:25 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:07:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54280 DF PROTO=TCP SPT=54906 DPT=9882 SEQ=1938058205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CB9DC0000000001030307) Oct 5 05:07:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56817 DF PROTO=TCP SPT=38104 DPT=9100 SEQ=2626924527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CC1F60000000001030307) Oct 5 05:07:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40382 DF PROTO=TCP SPT=41276 DPT=9102 SEQ=2700456863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CD1360000000001030307) Oct 5 05:07:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24223 DF PROTO=TCP SPT=57724 DPT=9101 SEQ=3684568959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CD9230000000001030307) Oct 5 05:07:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24225 DF PROTO=TCP SPT=57724 DPT=9101 SEQ=3684568959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CE5370000000001030307) Oct 5 05:07:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24226 DF PROTO=TCP SPT=57724 DPT=9101 SEQ=3684568959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74CF4F60000000001030307) Oct 5 05:07:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23248 DF PROTO=TCP SPT=46600 DPT=9105 SEQ=2504758202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D0ACD0000000001030307) Oct 5 05:07:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23249 DF PROTO=TCP SPT=46600 DPT=9105 SEQ=2504758202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D0EB60000000001030307) Oct 5 05:07:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23250 DF PROTO=TCP SPT=46600 DPT=9105 SEQ=2504758202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D16B60000000001030307) Oct 5 05:07:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23251 DF PROTO=TCP SPT=46600 DPT=9105 SEQ=2504758202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D26760000000001030307) Oct 5 05:07:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:07:53 localhost systemd[1]: tmp-crun.kjdpE1.mount: Deactivated successfully. 
Oct 5 05:07:53 localhost podman[110537]: 2025-10-05 09:07:53.923579803 +0000 UTC m=+0.091624627 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1) Oct 5 05:07:53 localhost podman[110537]: 2025-10-05 09:07:53.966421145 +0000 UTC m=+0.134465999 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 05:07:53 localhost podman[110537]: unhealthy Oct 5 05:07:53 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:07:53 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:07:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:07:55 localhost podman[110556]: 2025-10-05 09:07:55.914136171 +0000 UTC m=+0.079435674 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vcs-type=git, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, batch=17.1_20250721.1, vendor=Red Hat, Inc.) Oct 5 05:07:55 localhost podman[110556]: 2025-10-05 09:07:55.953219711 +0000 UTC m=+0.118519174 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public) Oct 5 05:07:55 localhost 
podman[110556]: unhealthy Oct 5 05:07:55 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:07:55 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:07:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13436 DF PROTO=TCP SPT=36118 DPT=9882 SEQ=3103759433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D2F0C0000000001030307) Oct 5 05:07:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17202 DF PROTO=TCP SPT=42892 DPT=9100 SEQ=2719497869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D37360000000001030307) Oct 5 05:08:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30749 DF PROTO=TCP SPT=50456 DPT=9102 SEQ=3411536285 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D46370000000001030307) Oct 5 05:08:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8073 DF PROTO=TCP SPT=50146 DPT=9101 SEQ=873563459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D4E520000000001030307) Oct 5 05:08:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8075 DF PROTO=TCP SPT=50146 DPT=9101 SEQ=873563459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC74D5A770000000001030307) Oct 5 05:08:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8076 DF PROTO=TCP SPT=50146 DPT=9101 SEQ=873563459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D6A370000000001030307) Oct 5 05:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37816 DF PROTO=TCP SPT=53194 DPT=9105 SEQ=2817586527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D7FFA0000000001030307) Oct 5 05:08:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37817 DF PROTO=TCP SPT=53194 DPT=9105 SEQ=2817586527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D83F60000000001030307) Oct 5 05:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37818 DF PROTO=TCP SPT=53194 DPT=9105 SEQ=2817586527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D8BF70000000001030307) Oct 5 05:08:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37819 DF PROTO=TCP SPT=53194 DPT=9105 SEQ=2817586527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74D9BB60000000001030307) Oct 5 05:08:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. 
Oct 5 05:08:24 localhost podman[110576]: 2025-10-05 09:08:24.149552689 +0000 UTC m=+0.069197486 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Oct 5 05:08:24 localhost podman[110576]: 2025-10-05 09:08:24.164035315 +0000 UTC m=+0.083680122 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, architecture=x86_64, distribution-scope=public) Oct 5 05:08:24 localhost podman[110576]: unhealthy Oct 5 05:08:24 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:08:24 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. 
Oct 5 05:08:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14626 DF PROTO=TCP SPT=33094 DPT=9100 SEQ=3840159291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DA4360000000001030307) Oct 5 05:08:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c. Oct 5 05:08:26 localhost podman[110596]: 2025-10-05 09:08:26.409424651 +0000 UTC m=+0.078812079 container health_status 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, release=1) Oct 5 05:08:26 localhost podman[110596]: 2025-10-05 09:08:26.423081536 +0000 UTC m=+0.092468974 container exec_died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1) Oct 5 05:08:26 localhost podman[110596]: unhealthy Oct 5 05:08:26 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:08:26 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed with result 'exit-code'. Oct 5 05:08:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14627 DF PROTO=TCP SPT=33094 DPT=9100 SEQ=3840159291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DAC360000000001030307) Oct 5 05:08:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9884 DF PROTO=TCP SPT=34506 DPT=9102 SEQ=3298400356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DBB760000000001030307) Oct 5 05:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9885 DF PROTO=TCP SPT=34506 DPT=9102 SEQ=3298400356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DC3770000000001030307) Oct 5 05:08:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55442 DF PROTO=TCP SPT=39510 DPT=9101 SEQ=91607441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC74DCF760000000001030307) Oct 5 05:08:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55443 DF PROTO=TCP SPT=39510 DPT=9101 SEQ=91607441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DDF360000000001030307) Oct 5 05:08:45 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Oct 5 05:08:45 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 62748 (conmon) with signal SIGKILL. Oct 5 05:08:45 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Oct 5 05:08:45 localhost systemd[1]: libpod-conmon-083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd.scope: Deactivated successfully. Oct 5 05:08:45 localhost podman[110706]: error opening file `/run/crun/083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd/status`: No such file or directory Oct 5 05:08:45 localhost podman[110693]: 2025-10-05 09:08:45.384398009 +0000 UTC m=+0.057528827 container cleanup 083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, container_name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': 
['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, name=rhosp17/openstack-nova-libvirt, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Oct 5 05:08:45 localhost podman[110693]: nova_virtlogd_wrapper Oct 5 05:08:45 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Oct 5 05:08:45 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. Oct 5 05:08:46 localhost python3.9[110799]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:46 localhost systemd[1]: Reloading. Oct 5 05:08:46 localhost systemd-rc-local-generator[110825]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:46 localhost systemd-sysv-generator[110829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:46 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Oct 5 05:08:46 localhost systemd[1]: Stopping nova_virtnodedevd container... Oct 5 05:08:46 localhost recover_tripleo_nova_virtqemud[110840]: 63458 Oct 5 05:08:46 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Oct 5 05:08:46 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Oct 5 05:08:46 localhost systemd[1]: libpod-2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23.scope: Deactivated successfully. Oct 5 05:08:46 localhost systemd[1]: libpod-2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23.scope: Consumed 1.406s CPU time. 
Oct 5 05:08:46 localhost podman[110842]: 2025-10-05 09:08:46.632593358 +0000 UTC m=+0.079962570 container died 2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=2, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-libvirt) Oct 5 05:08:46 localhost systemd[1]: tmp-crun.RaPXiH.mount: Deactivated successfully. Oct 5 05:08:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:08:46 localhost podman[110842]: 2025-10-05 09:08:46.671325409 +0000 UTC m=+0.118694601 container cleanup 2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, container_name=nova_virtnodedevd, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., version=17.1.9) Oct 5 05:08:46 localhost podman[110842]: nova_virtnodedevd Oct 5 05:08:46 localhost podman[110857]: 2025-10-05 09:08:46.71375277 +0000 UTC m=+0.071316823 container cleanup 2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, 
container_name=nova_virtnodedevd, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, distribution-scope=public) Oct 5 05:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15912 DF PROTO=TCP SPT=40058 DPT=9105 SEQ=3876163139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DF52A0000000001030307) Oct 5 05:08:46 localhost systemd[1]: libpod-conmon-2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23.scope: Deactivated successfully. Oct 5 05:08:46 localhost podman[110884]: error opening file `/run/crun/2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23/status`: No such file or directory Oct 5 05:08:46 localhost podman[110872]: 2025-10-05 09:08:46.819364233 +0000 UTC m=+0.067682884 container cleanup 2633464c108ae1bae5158354bd3d6e5d9cb245388d04de4df6783dc1c1710a23 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_virtnodedevd, release=2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': 
['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) 
Oct 5 05:08:46 localhost podman[110872]: nova_virtnodedevd Oct 5 05:08:46 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Oct 5 05:08:46 localhost systemd[1]: Stopped nova_virtnodedevd container. Oct 5 05:08:47 localhost python3.9[110977]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:47 localhost systemd[1]: var-lib-containers-storage-overlay-24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d-merged.mount: Deactivated successfully. Oct 5 05:08:47 localhost systemd[1]: Reloading. Oct 5 05:08:47 localhost systemd-rc-local-generator[111004]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:47 localhost systemd-sysv-generator[111007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15913 DF PROTO=TCP SPT=40058 DPT=9105 SEQ=3876163139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74DF9360000000001030307) Oct 5 05:08:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:47 localhost systemd[1]: Stopping nova_virtproxyd container... Oct 5 05:08:48 localhost systemd[1]: libpod-9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413.scope: Deactivated successfully. 
Oct 5 05:08:48 localhost podman[111018]: 2025-10-05 09:08:48.063298576 +0000 UTC m=+0.084355581 container died 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, release=2, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=nova_virtproxyd, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 05:08:48 localhost podman[111018]: 2025-10-05 09:08:48.102996303 +0000 UTC m=+0.124053278 container cleanup 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtproxyd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, 
batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, 
com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, vcs-type=git, release=2, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:08:48 localhost podman[111018]: nova_virtproxyd Oct 5 05:08:48 localhost podman[111033]: 2025-10-05 09:08:48.148402116 +0000 UTC m=+0.069428672 container cleanup 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, container_name=nova_virtproxyd, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, vcs-type=git, release=2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, version=17.1.9) Oct 5 05:08:48 localhost systemd[1]: libpod-conmon-9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413.scope: Deactivated successfully. 
Oct 5 05:08:48 localhost podman[111061]: error opening file `/run/crun/9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413/status`: No such file or directory Oct 5 05:08:48 localhost podman[111048]: 2025-10-05 09:08:48.25409233 +0000 UTC m=+0.069751951 container cleanup 9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, container_name=nova_virtproxyd, distribution-scope=public, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.9, vendor=Red Hat, Inc.) Oct 5 05:08:48 localhost podman[111048]: nova_virtproxyd Oct 5 05:08:48 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Oct 5 05:08:48 localhost systemd[1]: Stopped nova_virtproxyd container. Oct 5 05:08:48 localhost systemd[1]: var-lib-containers-storage-overlay-94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af-merged.mount: Deactivated successfully. Oct 5 05:08:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b40048d5fe0809f289d151e6e7f5330b7a604f2ff5e35091e673571499a1413-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:08:49 localhost python3.9[111154]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:49 localhost systemd[1]: Reloading. Oct 5 05:08:49 localhost systemd-rc-local-generator[111184]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:49 localhost systemd-sysv-generator[111187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Oct 5 05:08:49 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Oct 5 05:08:49 localhost systemd[1]: Stopping nova_virtqemud container... Oct 5 05:08:49 localhost systemd[1]: libpod-e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c.scope: Deactivated successfully. Oct 5 05:08:49 localhost systemd[1]: libpod-e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c.scope: Consumed 2.272s CPU time. 
Oct 5 05:08:49 localhost podman[111195]: 2025-10-05 09:08:49.537446073 +0000 UTC m=+0.083298372 container died e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, release=2, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, tcib_managed=true, config_id=tripleo_step3) Oct 5 05:08:49 localhost podman[111195]: 2025-10-05 09:08:49.576750219 +0000 UTC m=+0.122602488 container cleanup e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': 
['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, maintainer=OpenStack 
TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_id=tripleo_step3) Oct 5 05:08:49 localhost podman[111195]: nova_virtqemud Oct 5 05:08:49 localhost systemd[1]: var-lib-containers-storage-overlay-78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a-merged.mount: Deactivated successfully. Oct 5 05:08:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c-userdata-shm.mount: Deactivated successfully. Oct 5 05:08:49 localhost podman[111208]: 2025-10-05 09:08:49.628102026 +0000 UTC m=+0.080619689 container cleanup e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, release=2, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, version=17.1.9, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, container_name=nova_virtqemud) Oct 5 05:08:49 localhost systemd[1]: libpod-conmon-e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c.scope: Deactivated successfully. 
Oct 5 05:08:49 localhost podman[111238]: error opening file `/run/crun/e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c/status`: No such file or directory Oct 5 05:08:49 localhost podman[111225]: 2025-10-05 09:08:49.743272129 +0000 UTC m=+0.078322935 container cleanup e5004871a22f1675c3ad41755a339f006e24803bff1db6e593c96d6dc1b35e0c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_virtqemud, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T14:56:59, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9) Oct 5 05:08:49 localhost podman[111225]: nova_virtqemud Oct 5 05:08:49 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Oct 5 05:08:49 localhost systemd[1]: Stopped nova_virtqemud container. 
Oct 5 05:08:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15914 DF PROTO=TCP SPT=40058 DPT=9105 SEQ=3876163139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E01360000000001030307) Oct 5 05:08:50 localhost python3.9[111331]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:50 localhost systemd[1]: Reloading. Oct 5 05:08:50 localhost systemd-sysv-generator[111363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:50 localhost systemd-rc-local-generator[111358]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:51 localhost python3.9[111461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:51 localhost systemd[1]: Reloading. Oct 5 05:08:51 localhost systemd-rc-local-generator[111487]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:51 localhost systemd-sysv-generator[111491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:08:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:52 localhost systemd[1]: Stopping nova_virtsecretd container... Oct 5 05:08:52 localhost systemd[1]: libpod-0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa.scope: Deactivated successfully. Oct 5 05:08:52 localhost podman[111502]: 2025-10-05 09:08:52.272368055 +0000 UTC m=+0.061452374 container died 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:56:59, architecture=x86_64, container_name=nova_virtsecretd, version=17.1.9) Oct 5 05:08:52 localhost podman[111502]: 2025-10-05 09:08:52.309689956 +0000 UTC m=+0.098774245 container cleanup 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, container_name=nova_virtsecretd, release=2, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, io.buildah.version=1.33.12, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:08:52 localhost podman[111502]: nova_virtsecretd Oct 5 05:08:52 localhost podman[111515]: 2025-10-05 09:08:52.364169248 +0000 UTC m=+0.077062391 container cleanup 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, container_name=nova_virtsecretd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, config_id=tripleo_step3, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true) Oct 5 05:08:52 localhost systemd[1]: libpod-conmon-0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa.scope: Deactivated successfully. 
Oct 5 05:08:52 localhost podman[111542]: error opening file `/run/crun/0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa/status`: No such file or directory Oct 5 05:08:52 localhost podman[111531]: 2025-10-05 09:08:52.46210751 +0000 UTC m=+0.070140022 container cleanup 0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, config_id=tripleo_step3, release=2, container_name=nova_virtsecretd, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible) Oct 5 05:08:52 localhost podman[111531]: nova_virtsecretd Oct 5 05:08:52 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Oct 5 05:08:52 localhost systemd[1]: Stopped nova_virtsecretd container. Oct 5 05:08:53 localhost systemd[1]: var-lib-containers-storage-overlay-d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684-merged.mount: Deactivated successfully. Oct 5 05:08:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:08:53 localhost python3.9[111635]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15915 DF PROTO=TCP SPT=40058 DPT=9105 SEQ=3876163139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E10F60000000001030307) Oct 5 05:08:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379. Oct 5 05:08:54 localhost systemd[1]: Reloading. Oct 5 05:08:54 localhost podman[111638]: 2025-10-05 09:08:54.432739482 +0000 UTC m=+0.092567355 container health_status 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1) Oct 5 05:08:54 localhost podman[111638]: 2025-10-05 09:08:54.45054563 +0000 UTC m=+0.110373503 container exec_died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Oct 5 05:08:54 localhost podman[111638]: unhealthy Oct 5 05:08:54 localhost systemd-rc-local-generator[111676]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:54 localhost systemd-sysv-generator[111681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:54 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:08:54 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed with result 'exit-code'. Oct 5 05:08:54 localhost systemd[1]: Stopping nova_virtstoraged container... Oct 5 05:08:54 localhost systemd[1]: libpod-7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028.scope: Deactivated successfully. 
Oct 5 05:08:54 localhost podman[111693]: 2025-10-05 09:08:54.779727994 +0000 UTC m=+0.068869357 container died 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, build-date=2025-07-21T14:56:59, container_name=nova_virtstoraged, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, release=2, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public) Oct 5 05:08:54 localhost systemd[1]: tmp-crun.g2vNw7.mount: Deactivated successfully. 
Oct 5 05:08:54 localhost podman[111693]: 2025-10-05 09:08:54.818128795 +0000 UTC m=+0.107270138 container cleanup 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, release=2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, container_name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc.) 
Oct 5 05:08:54 localhost podman[111693]: nova_virtstoraged Oct 5 05:08:54 localhost podman[111708]: 2025-10-05 09:08:54.863537989 +0000 UTC m=+0.073976867 container cleanup 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=nova_virtstoraged, release=2, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Oct 5 05:08:54 localhost systemd[1]: libpod-conmon-7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028.scope: Deactivated successfully. 
Oct 5 05:08:54 localhost podman[111736]: error opening file `/run/crun/7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028/status`: No such file or directory Oct 5 05:08:54 localhost podman[111724]: 2025-10-05 09:08:54.968287947 +0000 UTC m=+0.067203701 container cleanup 7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, tcib_managed=true, container_name=nova_virtstoraged, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5d5b173631792e25c080b07e9b3e041b'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, version=17.1.9, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2) Oct 5 05:08:54 localhost podman[111724]: nova_virtstoraged Oct 5 05:08:54 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Oct 5 05:08:54 localhost systemd[1]: Stopped nova_virtstoraged container. Oct 5 05:08:55 localhost systemd[1]: var-lib-containers-storage-overlay-d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843-merged.mount: Deactivated successfully. Oct 5 05:08:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028-userdata-shm.mount: Deactivated successfully. 
Oct 5 05:08:55 localhost python3.9[111829]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:08:55 localhost systemd[1]: Reloading. Oct 5 05:08:55 localhost systemd-rc-local-generator[111856]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:08:55 localhost systemd-sysv-generator[111860]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:08:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:08:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26001 DF PROTO=TCP SPT=34648 DPT=9882 SEQ=697817677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E196C0000000001030307) Oct 5 05:08:56 localhost systemd[1]: Stopping ovn_controller container... Oct 5 05:08:56 localhost systemd[1]: libpod-2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.scope: Deactivated successfully. Oct 5 05:08:56 localhost systemd[1]: libpod-2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.scope: Consumed 2.383s CPU time. 
Oct 5 05:08:56 localhost podman[111870]: 2025-10-05 09:08:56.217381362 +0000 UTC m=+0.070903694 container died 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc.) 
Oct 5 05:08:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.timer: Deactivated successfully.
Oct 5 05:08:56 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.
Oct 5 05:08:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed to open /run/systemd/transient/2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: No such file or directory
Oct 5 05:08:56 localhost podman[111870]: 2025-10-05 09:08:56.259583447 +0000 UTC m=+0.113105759 container cleanup 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44)
Oct 5 05:08:56 localhost podman[111870]: ovn_controller
Oct 5 05:08:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.timer: Failed to open /run/systemd/transient/2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.timer: No such file or directory
Oct 5 05:08:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed to open /run/systemd/transient/2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: No such file or directory
Oct 5 05:08:56 localhost podman[111882]: 2025-10-05 09:08:56.310110181 +0000 UTC m=+0.079665773 container cleanup 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller)
Oct 5 05:08:56 localhost systemd[1]: libpod-conmon-2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.scope: Deactivated successfully.
Oct 5 05:08:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.timer: Failed to open /run/systemd/transient/2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.timer: No such file or directory
Oct 5 05:08:56 localhost systemd[1]: 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: Failed to open /run/systemd/transient/2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c.service: No such file or directory
Oct 5 05:08:56 localhost podman[111894]: 2025-10-05 09:08:56.410933692 +0000 UTC m=+0.073776121 container cleanup 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, version=17.1.9)
Oct 5 05:08:56 localhost podman[111894]: ovn_controller
Oct 5 05:08:56 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Oct 5 05:08:56 localhost systemd[1]: Stopped ovn_controller container.
Oct 5 05:08:56 localhost systemd[1]: var-lib-containers-storage-overlay-0afdc6c9db300a9a5bad1fd5c74a09e603d29e9c3f62337bb3767b8218877207-merged.mount: Deactivated successfully.
Oct 5 05:08:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c-userdata-shm.mount: Deactivated successfully.
Oct 5 05:08:57 localhost python3.9[111997]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:08:57 localhost systemd[1]: Reloading.
Oct 5 05:08:57 localhost systemd-rc-local-generator[112018]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:08:57 localhost systemd-sysv-generator[112025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:08:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:08:57 localhost systemd[1]: Stopping ovn_metadata_agent container...
Oct 5 05:08:57 localhost systemd[1]: tmp-crun.y7SNtI.mount: Deactivated successfully.
Oct 5 05:08:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37764 DF PROTO=TCP SPT=38120 DPT=9100 SEQ=3589141195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E21760000000001030307)
Oct 5 05:08:58 localhost systemd[1]: libpod-1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.scope: Deactivated successfully.
Oct 5 05:08:58 localhost systemd[1]: libpod-1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.scope: Consumed 8.824s CPU time.
Oct 5 05:08:58 localhost podman[112038]: 2025-10-05 09:08:58.301553984 +0000 UTC m=+0.767999222 container died 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12)
Oct 5 05:08:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.timer: Deactivated successfully.
Oct 5 05:08:58 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.
Oct 5 05:08:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed to open /run/systemd/transient/1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: No such file or directory
Oct 5 05:08:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379-userdata-shm.mount: Deactivated successfully.
Oct 5 05:08:58 localhost podman[112038]: 2025-10-05 09:08:58.361421702 +0000 UTC m=+0.827866950 container cleanup 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_metadata_agent, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 5 05:08:58 localhost podman[112038]: ovn_metadata_agent
Oct 5 05:08:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.timer: Failed to open /run/systemd/transient/1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.timer: No such file or directory
Oct 5 05:08:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed to open /run/systemd/transient/1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: No such file or directory
Oct 5 05:08:58 localhost podman[112052]: 2025-10-05 09:08:58.449300199 +0000 UTC m=+0.133785584 container cleanup 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 5 05:08:58 localhost systemd[1]: libpod-conmon-1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.scope: Deactivated successfully.
Oct 5 05:08:58 localhost podman[112082]: error opening file `/run/crun/1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379/status`: No such file or directory
Oct 5 05:08:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.timer: Failed to open /run/systemd/transient/1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.timer: No such file or directory
Oct 5 05:08:58 localhost systemd[1]: 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: Failed to open /run/systemd/transient/1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379.service: No such file or directory
Oct 5 05:08:58 localhost podman[112070]: 2025-10-05 09:08:58.567517286 +0000 UTC m=+0.085855412 container cleanup 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, architecture=x86_64, release=1, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 5 05:08:58 localhost podman[112070]: ovn_metadata_agent
Oct 5 05:08:58 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Oct 5 05:08:58 localhost systemd[1]: Stopped ovn_metadata_agent container.
Oct 5 05:08:59 localhost systemd[1]: var-lib-containers-storage-overlay-62a5eda9ac1c94d4c818199f85ccf1cfb1f0d26c0be01afb2a73d9178a056789-merged.mount: Deactivated successfully.
Oct 5 05:08:59 localhost python3.9[112176]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:08:59 localhost systemd[1]: Reloading.
Oct 5 05:08:59 localhost systemd-sysv-generator[112209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:08:59 localhost systemd-rc-local-generator[112206]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:08:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:09:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46234 DF PROTO=TCP SPT=46528 DPT=9102 SEQ=3836498662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E30B60000000001030307)
Oct 5 05:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56426 DF PROTO=TCP SPT=43342 DPT=9101 SEQ=758948277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E38B20000000001030307)
Oct 5 05:09:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56428 DF PROTO=TCP SPT=43342 DPT=9101 SEQ=758948277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E44B70000000001030307)
Oct 5 05:09:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56429 DF PROTO=TCP SPT=43342 DPT=9101 SEQ=758948277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E54770000000001030307)
Oct 5 05:09:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46622 DF PROTO=TCP SPT=57828 DPT=9105 SEQ=1961720035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E6A5B0000000001030307)
Oct 5 05:09:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46623 DF PROTO=TCP SPT=57828 DPT=9105 SEQ=1961720035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E6E770000000001030307)
Oct 5 05:09:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46624 DF PROTO=TCP SPT=57828 DPT=9105 SEQ=1961720035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E76760000000001030307)
Oct 5 05:09:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46625 DF PROTO=TCP SPT=57828 DPT=9105 SEQ=1961720035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E86360000000001030307)
Oct 5 05:09:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49668 DF PROTO=TCP SPT=41046 DPT=9100 SEQ=1705401752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E8AAF0000000001030307)
Oct 5 05:09:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49670 DF PROTO=TCP SPT=41046 DPT=9100 SEQ=1705401752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74E96B60000000001030307)
Oct 5 05:09:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28858 DF PROTO=TCP SPT=55592 DPT=9102 SEQ=2135365028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EA5F70000000001030307)
Oct 5 05:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6046 DF PROTO=TCP SPT=58488 DPT=9101 SEQ=3208814058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EADE30000000001030307)
Oct 5 05:09:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6048 DF PROTO=TCP SPT=58488 DPT=9101 SEQ=3208814058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EB9F60000000001030307)
Oct 5 05:09:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6049 DF PROTO=TCP SPT=58488 DPT=9101 SEQ=3208814058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EC9B60000000001030307)
Oct 5 05:09:44 localhost podman[112331]: 2025-10-05 09:09:44.607928874 +0000 UTC m=+0.093204174 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:09:44 localhost podman[112331]: 2025-10-05 09:09:44.70568926 +0000 UTC m=+0.190964560 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4103 DF PROTO=TCP SPT=40618 DPT=9105 SEQ=76140263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EDF8A0000000001030307)
Oct 5 05:09:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4104 DF PROTO=TCP SPT=40618 DPT=9105 SEQ=76140263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EE3760000000001030307)
Oct 5 05:09:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4105 DF PROTO=TCP SPT=40618 DPT=9105 SEQ=76140263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EEB760000000001030307)
Oct 5 05:09:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4106 DF PROTO=TCP SPT=40618 DPT=9105 SEQ=76140263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74EFB360000000001030307)
Oct 5 05:09:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48594 DF PROTO=TCP SPT=33294 DPT=9882 SEQ=1969727870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F03CC0000000001030307)
Oct 5 05:09:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28846 DF PROTO=TCP SPT=55478 DPT=9100 SEQ=620429900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F0BF60000000001030307)
Oct 5 05:10:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52452 DF PROTO=TCP SPT=48202 DPT=9102 SEQ=2644943064 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F1AF70000000001030307)
Oct 5 05:10:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35751 DF PROTO=TCP SPT=55938 DPT=9101 SEQ=3153844860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F23130000000001030307)
Oct 5 05:10:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35753 DF PROTO=TCP SPT=55938 DPT=9101 SEQ=3153844860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F2F360000000001030307)
Oct 5 05:10:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35754 DF PROTO=TCP SPT=55938 DPT=9101 SEQ=3153844860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F3EF60000000001030307)
Oct 5 05:10:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1919 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=3846471080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F54BA0000000001030307)
Oct 5 05:10:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1920 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=3846471080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F58B70000000001030307)
Oct 5 05:10:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1921 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=3846471080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F60B60000000001030307)
Oct 5 05:10:19 localhost systemd[1]: session-37.scope: Deactivated successfully.
Oct 5 05:10:19 localhost systemd[1]: session-37.scope: Consumed 18.684s CPU time.
Oct 5 05:10:19 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit.
Oct 5 05:10:19 localhost systemd-logind[760]: Removed session 37.
Oct 5 05:10:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1922 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=3846471080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F70760000000001030307)
Oct 5 05:10:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3000 DF PROTO=TCP SPT=54494 DPT=9100 SEQ=204510131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F78F60000000001030307)
Oct 5 05:10:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3001 DF PROTO=TCP SPT=54494 DPT=9100 SEQ=204510131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F80F60000000001030307)
Oct 5 05:10:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56646 DF PROTO=TCP SPT=37502 DPT=9102 SEQ=4217268277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F90360000000001030307)
Oct 5 05:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46761 DF PROTO=TCP SPT=50168 DPT=9101 SEQ=1664847169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74F98430000000001030307)
Oct 5 05:10:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46763 DF PROTO=TCP SPT=50168 DPT=9101 SEQ=1664847169 ACK=0
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FA4360000000001030307) Oct 5 05:10:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46764 DF PROTO=TCP SPT=50168 DPT=9101 SEQ=1664847169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FB3F60000000001030307) Oct 5 05:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15541 DF PROTO=TCP SPT=56280 DPT=9105 SEQ=3849385783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FC9EA0000000001030307) Oct 5 05:10:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15542 DF PROTO=TCP SPT=56280 DPT=9105 SEQ=3849385783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FCDF60000000001030307) Oct 5 05:10:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15543 DF PROTO=TCP SPT=56280 DPT=9105 SEQ=3849385783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FD5F70000000001030307) Oct 5 05:10:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15544 DF PROTO=TCP SPT=56280 DPT=9105 SEQ=3849385783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FE5B60000000001030307) Oct 5 05:10:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14702 DF PROTO=TCP SPT=57016 DPT=9882 
SEQ=2329404941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FEE2B0000000001030307) Oct 5 05:10:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37745 DF PROTO=TCP SPT=57100 DPT=9100 SEQ=1925381349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC74FF6370000000001030307) Oct 5 05:11:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30609 DF PROTO=TCP SPT=52700 DPT=9102 SEQ=2998010322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75005760000000001030307) Oct 5 05:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47936 DF PROTO=TCP SPT=53734 DPT=9101 SEQ=1146550672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7500D730000000001030307) Oct 5 05:11:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47938 DF PROTO=TCP SPT=53734 DPT=9101 SEQ=1146550672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75019770000000001030307) Oct 5 05:11:10 localhost sshd[112551]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:11:10 localhost systemd-logind[760]: New session 38 of user zuul. Oct 5 05:11:10 localhost systemd[1]: Started Session 38 of User zuul. 
Oct 5 05:11:10 localhost python3.9[112632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47939 DF PROTO=TCP SPT=53734 DPT=9101 SEQ=1146550672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75029360000000001030307) Oct 5 05:11:11 localhost python3.9[112724]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:12 localhost python3.9[112816]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:12 localhost python3.9[112908]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:13 localhost python3.9[113000]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:13 localhost python3.9[113092]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:14 localhost python3.9[113184]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:14 localhost python3.9[113276]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:15 localhost python3.9[113368]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:16 localhost python3.9[113460]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:16 localhost python3.9[113552]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29244 DF PROTO=TCP SPT=41654 DPT=9105 SEQ=628734929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7503F1A0000000001030307) Oct 5 05:11:17 localhost python3.9[113644]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29245 DF PROTO=TCP SPT=41654 DPT=9105 SEQ=628734929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75043360000000001030307) Oct 5 05:11:17 localhost python3.9[113736]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:18 localhost python3.9[113828]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:19 localhost python3.9[113920]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:19 localhost python3.9[114012]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29246 DF PROTO=TCP SPT=41654 DPT=9105 SEQ=628734929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7504B360000000001030307) Oct 5 05:11:20 localhost python3.9[114104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:20 localhost python3.9[114196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:21 localhost python3.9[114288]: 
ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:22 localhost python3.9[114380]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:22 localhost python3.9[114472]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:23 localhost python3.9[114564]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29247 DF PROTO=TCP SPT=41654 DPT=9105 SEQ=628734929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7505AF60000000001030307) Oct 5 05:11:24 localhost python3.9[114656]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:24 localhost python3.9[114748]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:25 localhost python3.9[114840]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33081 DF PROTO=TCP SPT=35886 DPT=9882 SEQ=1023397912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750635B0000000001030307) Oct 5 05:11:26 localhost python3.9[114932]: 
ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:26 localhost python3.9[115024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:27 localhost python3.9[115116]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:27 localhost python3.9[115208]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41924 DF PROTO=TCP SPT=56502 DPT=9100 SEQ=347506364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7506B760000000001030307) Oct 5 05:11:28 localhost python3.9[115300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:28 localhost python3.9[115392]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:29 localhost python3.9[115484]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:30 localhost python3.9[115576]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:30 localhost python3.9[115668]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:31 localhost python3.9[115760]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:31 localhost python3.9[115852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6672 DF PROTO=TCP SPT=43898 DPT=9102 SEQ=1824717981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7507AB70000000001030307) Oct 5 05:11:32 localhost python3.9[115944]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:33 localhost python3.9[116036]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:33 localhost python3.9[116128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57825 DF PROTO=TCP SPT=40742 DPT=9101 SEQ=1987164085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75082A20000000001030307) Oct 5 05:11:34 localhost python3.9[116220]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:11:35 localhost python3.9[116312]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:11:35 localhost python3.9[116404]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:11:36 localhost python3.9[116496]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57827 DF PROTO=TCP SPT=40742 DPT=9101 SEQ=1987164085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7508EB60000000001030307)
Oct 5 05:11:37 localhost python3.9[116588]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 5 05:11:38 localhost python3.9[116680]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:11:38 localhost systemd[1]: Reloading.
Oct 5 05:11:38 localhost systemd-rc-local-generator[116702]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:11:38 localhost systemd-sysv-generator[116705]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:11:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:11:39 localhost python3.9[116808]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:40 localhost python3.9[116901]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:40 localhost python3.9[116994]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57828 DF PROTO=TCP SPT=40742 DPT=9101 SEQ=1987164085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7509E770000000001030307)
Oct 5 05:11:41 localhost python3.9[117087]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:42 localhost python3.9[117180]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:42 localhost python3.9[117273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:43 localhost python3.9[117366]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:43 localhost python3.9[117459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:44 localhost python3.9[117552]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:45 localhost python3.9[117645]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:45 localhost python3.9[117738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:46 localhost python3.9[117831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58486 DF PROTO=TCP SPT=51828 DPT=9105 SEQ=3404640301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750B44C0000000001030307)
Oct 5 05:11:47 localhost python3.9[117924]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:47 localhost python3.9[118017]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58487 DF PROTO=TCP SPT=51828 DPT=9105 SEQ=3404640301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750B8360000000001030307)
Oct 5 05:11:48 localhost python3.9[118123]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:48 localhost python3.9[118252]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:49 localhost python3.9[118357]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58488 DF PROTO=TCP SPT=51828 DPT=9105 SEQ=3404640301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750C0360000000001030307)
Oct 5 05:11:50 localhost python3.9[118465]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:50 localhost python3.9[118558]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:51 localhost python3.9[118651]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:51 localhost python3.9[118744]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:11:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58489 DF PROTO=TCP SPT=51828 DPT=9105 SEQ=3404640301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750CFF60000000001030307)
Oct 5 05:11:54 localhost systemd[1]: session-38.scope: Deactivated successfully.
Oct 5 05:11:54 localhost systemd[1]: session-38.scope: Consumed 30.768s CPU time.
Oct 5 05:11:54 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit.
Oct 5 05:11:54 localhost systemd-logind[760]: Removed session 38.
Oct 5 05:11:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8895 DF PROTO=TCP SPT=41064 DPT=9882 SEQ=1978054954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750D88C0000000001030307)
Oct 5 05:11:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41553 DF PROTO=TCP SPT=60154 DPT=9100 SEQ=1537149247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750E0B60000000001030307)
Oct 5 05:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55236 DF PROTO=TCP SPT=55122 DPT=9102 SEQ=3074053206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750EFB60000000001030307)
Oct 5 05:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21188 DF PROTO=TCP SPT=49366 DPT=9101 SEQ=3110718178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC750F7D20000000001030307)
Oct 5 05:12:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21190 DF PROTO=TCP SPT=49366 DPT=9101 SEQ=3110718178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75103F60000000001030307)
Oct 5 05:12:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21191 DF PROTO=TCP SPT=49366 DPT=9101 SEQ=3110718178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75113B90000000001030307)
Oct 5 05:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65378 DF PROTO=TCP SPT=34826 DPT=9105 SEQ=2804476235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751297B0000000001030307)
Oct 5 05:12:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65379 DF PROTO=TCP SPT=34826 DPT=9105 SEQ=2804476235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7512D760000000001030307)
Oct 5 05:12:19 localhost sshd[118760]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:12:19 localhost systemd-logind[760]: New session 39 of user zuul.
Oct 5 05:12:19 localhost systemd[1]: Started Session 39 of User zuul.
Oct 5 05:12:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65380 DF PROTO=TCP SPT=34826 DPT=9105 SEQ=2804476235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75135770000000001030307)
Oct 5 05:12:20 localhost python3.9[118853]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 5 05:12:21 localhost python3.9[118957]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:12:22 localhost python3.9[119049]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:12:23 localhost python3.9[119142]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:12:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65381 DF PROTO=TCP SPT=34826 DPT=9105 SEQ=2804476235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75145360000000001030307)
Oct 5 05:12:24 localhost python3.9[119234]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:12:24 localhost python3.9[119326]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:12:25 localhost python3.9[119399]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655544.303751-179-189855266228802/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:12:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11585 DF PROTO=TCP SPT=38198 DPT=9882 SEQ=4225814293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7514DBD0000000001030307)
Oct 5 05:12:26 localhost python3.9[119491]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:12:27 localhost python3.9[119587]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:12:27 localhost python3.9[119677]: ansible-ansible.builtin.service_facts Invoked
Oct 5 05:12:28 localhost network[119694]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 5 05:12:28 localhost network[119695]: 'network-scripts' will be removed from distribution in near future.
Oct 5 05:12:28 localhost network[119696]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 5 05:12:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45878 DF PROTO=TCP SPT=33340 DPT=9100 SEQ=587504059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75155F60000000001030307)
Oct 5 05:12:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39147 DF PROTO=TCP SPT=32840 DPT=9102 SEQ=542299635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75164F60000000001030307)
Oct 5 05:12:33 localhost python3.9[119894]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1954 DF PROTO=TCP SPT=52616 DPT=9101 SEQ=1304292850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7516D030000000001030307)
Oct 5 05:12:34 localhost python3.9[119984]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:12:35 localhost python3.9[120080]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:12:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1956 DF PROTO=TCP SPT=52616 DPT=9101 SEQ=1304292850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75178F70000000001030307)
Oct 5 05:12:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1957 DF PROTO=TCP SPT=52616 DPT=9101 SEQ=1304292850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75188B70000000001030307)
Oct 5 05:12:44 localhost systemd[1]: Stopping OpenSSH server daemon...
Oct 5 05:12:44 localhost systemd[1]: sshd.service: Deactivated successfully.
Oct 5 05:12:44 localhost systemd[1]: Stopped OpenSSH server daemon.
Oct 5 05:12:44 localhost systemd[1]: Stopped target sshd-keygen.target.
Oct 5 05:12:44 localhost systemd[1]: Stopping sshd-keygen.target...
Oct 5 05:12:44 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:44 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:44 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:44 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 05:12:44 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 05:12:44 localhost sshd[120123]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:12:44 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 05:12:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 05:12:44 localhost systemd[1]: Starting man-db-cache-update.service...
Oct 5 05:12:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 5 05:12:45 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: Finished man-db-cache-update.service.
Oct 5 05:12:45 localhost systemd[1]: run-r1d8846a4b0144ffd9e4dd7b492653a71.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: run-r3638df6010b8462f9121769a1cc6e7dc.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: Stopping OpenSSH server daemon...
Oct 5 05:12:45 localhost systemd[1]: sshd.service: Deactivated successfully.
Oct 5 05:12:45 localhost systemd[1]: Stopped OpenSSH server daemon.
Oct 5 05:12:45 localhost systemd[1]: Stopped target sshd-keygen.target.
Oct 5 05:12:45 localhost systemd[1]: Stopping sshd-keygen.target...
Oct 5 05:12:45 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:45 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:45 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 5 05:12:45 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 5 05:12:45 localhost systemd[1]: Starting OpenSSH server daemon...
Oct 5 05:12:45 localhost sshd[120393]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:12:45 localhost systemd[1]: Started OpenSSH server daemon.
Oct 5 05:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15079 DF PROTO=TCP SPT=42894 DPT=9105 SEQ=547020558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7519EAA0000000001030307)
Oct 5 05:12:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15080 DF PROTO=TCP SPT=42894 DPT=9105 SEQ=547020558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751A2B60000000001030307)
Oct 5 05:12:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15081 DF PROTO=TCP SPT=42894 DPT=9105 SEQ=547020558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751AAB60000000001030307)
Oct 5 05:12:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15082 DF PROTO=TCP SPT=42894 DPT=9105 SEQ=547020558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751BA760000000001030307)
Oct 5 05:12:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6298 DF PROTO=TCP SPT=43216 DPT=9882 SEQ=1857393807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751C2EC0000000001030307)
Oct 5 05:12:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41452 DF PROTO=TCP SPT=41174 DPT=9100 SEQ=1916042953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751CAF70000000001030307)
Oct 5 05:13:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5027 DF PROTO=TCP SPT=54948 DPT=9102 SEQ=2649198388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751DA370000000001030307)
Oct 5 05:13:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36022 DF PROTO=TCP SPT=35664 DPT=9101 SEQ=2763649047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751E2330000000001030307)
Oct 5 05:13:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36024 DF PROTO=TCP SPT=35664 DPT=9101 SEQ=2763649047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751EE370000000001030307)
Oct 5 05:13:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36025 DF PROTO=TCP SPT=35664 DPT=9101 SEQ=2763649047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC751FDF60000000001030307)
Oct 5 05:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33956 DF PROTO=TCP SPT=56558 DPT=9105 SEQ=3640978860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75213DA0000000001030307)
Oct 5 05:13:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33957 DF PROTO=TCP SPT=56558 DPT=9105 SEQ=3640978860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75217F60000000001030307)
Oct 5 05:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33958 DF PROTO=TCP SPT=56558 DPT=9105 SEQ=3640978860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7521FF70000000001030307)
Oct 5 05:13:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33959 DF PROTO=TCP SPT=56558 DPT=9105 SEQ=3640978860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7522FB60000000001030307)
Oct 5 05:13:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35027 DF PROTO=TCP SPT=33270 DPT=9882 SEQ=2272643780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752381C0000000001030307)
Oct 5 05:13:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30274 DF PROTO=TCP SPT=53424 DPT=9100 SEQ=3221013845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75240360000000001030307)
Oct 5 05:13:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47019 DF PROTO=TCP SPT=49152 DPT=9102 SEQ=320808516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7524F760000000001030307)
Oct 5 05:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40890 DF PROTO=TCP SPT=34728 DPT=9101 SEQ=1989771159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75257630000000001030307)
Oct 5 05:13:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40892 DF PROTO=TCP SPT=34728 DPT=9101 SEQ=1989771159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75263760000000001030307)
Oct 5 05:13:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40893 DF PROTO=TCP SPT=34728 DPT=9101 SEQ=1989771159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75273360000000001030307)
Oct 5 05:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14993 DF PROTO=TCP SPT=57528 DPT=9105 SEQ=427919012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752890A0000000001030307)
Oct 5 05:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14994 DF PROTO=TCP SPT=57528 DPT=9105 SEQ=427919012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7528CF70000000001030307)
Oct 5 05:13:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14995 DF PROTO=TCP SPT=57528 DPT=9105 SEQ=427919012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75294F60000000001030307)
Oct 5 05:13:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14996 DF PROTO=TCP SPT=57528 DPT=9105 SEQ=427919012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752A4B60000000001030307)
Oct 5 05:13:55 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=16 res=1
Oct 5 05:13:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9321 DF PROTO=TCP SPT=40876 DPT=9882 SEQ=412621447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752AD4C0000000001030307)
Oct 5 05:13:56 localhost kernel: SELinux: Converting 2740 SID table entries...
Oct 5 05:13:56 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:13:56 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:13:56 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:13:57 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=17 res=1
Oct 5 05:13:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49644 DF PROTO=TCP SPT=40254 DPT=9100 SEQ=2462626751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752B5760000000001030307)
Oct 5 05:13:58 localhost python3.9[121112]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:13:58 localhost python3.9[121204]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:13:59 localhost python3.9[121277]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655638.3759189-402-258003480618665/.source.fact _original_basename=.sjmcw68k follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:14:00 localhost python3.9[121367]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:14:01 localhost python3.9[121465]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:14:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10179 DF PROTO=TCP SPT=52310 DPT=9102 SEQ=2776746292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752C4760000000001030307)
Oct 5 05:14:02 localhost python3.9[121519]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8120 DF PROTO=TCP SPT=50756 DPT=9101 SEQ=4127270885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752CC930000000001030307)
Oct 5 05:14:05 localhost systemd[1]: Reloading.
Oct 5 05:14:05 localhost systemd-rc-local-generator[121552]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:14:06 localhost systemd-sysv-generator[121555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:14:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:14:06 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Oct 5 05:14:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8122 DF PROTO=TCP SPT=50756 DPT=9101 SEQ=4127270885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752D8B60000000001030307)
Oct 5 05:14:07 localhost python3.9[121658]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:14:09 localhost python3.9[121897]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 5 05:14:10 localhost python3.9[121989]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap
count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Oct 5 05:14:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8123 DF PROTO=TCP SPT=50756 DPT=9101 SEQ=4127270885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752E8760000000001030307) Oct 5 05:14:11 localhost python3.9[122082]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:14:12 localhost python3.9[122174]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Oct 5 05:14:14 localhost python3.9[122266]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:14:14 localhost python3.9[122358]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:14:15 localhost python3.9[122431]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655654.3369377-726-103175158210336/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43352 DF PROTO=TCP SPT=36016 DPT=9105 SEQ=3870955758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC752FE3A0000000001030307) Oct 5 05:14:16 localhost python3.9[122523]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None Oct 5 05:14:17 localhost python3.9[122616]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Oct 5 05:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43353 DF PROTO=TCP SPT=36016 DPT=9105 SEQ=3870955758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75302360000000001030307) Oct 5 05:14:18 localhost python3.9[122709]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 5 05:14:19 localhost python3.9[122807]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Oct 5 05:14:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43354 DF PROTO=TCP SPT=36016 DPT=9105 SEQ=3870955758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7530A360000000001030307) Oct 5 05:14:20 localhost python3.9[122899]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:14:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43355 DF PROTO=TCP SPT=36016 DPT=9105 SEQ=3870955758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75319F60000000001030307) Oct 5 05:14:23 localhost python3.9[122993]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:14:24 localhost python3.9[123085]: ansible-ansible.legacy.stat Invoked with 
path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:14:25 localhost python3.9[123158]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655664.1320095-969-224499603173591/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:14:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52605 DF PROTO=TCP SPT=34914 DPT=9100 SEQ=1053242587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75322770000000001030307) Oct 5 05:14:26 localhost python3.9[123250]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:14:26 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 5 05:14:26 localhost systemd[1]: Stopped Load Kernel Modules. Oct 5 05:14:26 localhost systemd[1]: Stopping Load Kernel Modules... Oct 5 05:14:26 localhost systemd[1]: Starting Load Kernel Modules... Oct 5 05:14:26 localhost systemd-modules-load[123254]: Module 'msr' is built in Oct 5 05:14:26 localhost systemd[1]: Finished Load Kernel Modules. 
Oct 5 05:14:27 localhost python3.9[123347]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:14:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52606 DF PROTO=TCP SPT=34914 DPT=9100 SEQ=1053242587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7532A760000000001030307) Oct 5 05:14:29 localhost python3.9[123420]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655667.423754-1038-147637750125913/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:14:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30008 DF PROTO=TCP SPT=34150 DPT=9102 SEQ=2012604247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75339B60000000001030307) Oct 5 05:14:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1792 DF PROTO=TCP SPT=54984 DPT=9101 SEQ=107035863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75341C20000000001030307) Oct 5 05:14:34 localhost python3.9[123512]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False 
cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:14:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1794 DF PROTO=TCP SPT=54984 DPT=9101 SEQ=107035863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7534DB70000000001030307) Oct 5 05:14:39 localhost python3.9[123604]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:14:39 localhost python3.9[123696]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Oct 5 05:14:40 localhost python3.9[123786]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:14:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1795 DF PROTO=TCP SPT=54984 DPT=9101 SEQ=107035863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7535D760000000001030307) Oct 5 05:14:41 localhost python3.9[123878]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:14:41 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... 
Oct 5 05:14:41 localhost systemd[1]: tuned.service: Deactivated successfully. Oct 5 05:14:41 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Oct 5 05:14:41 localhost systemd[1]: tuned.service: Consumed 1.780s CPU time, no IO. Oct 5 05:14:41 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Oct 5 05:14:42 localhost systemd[1]: Started Dynamic System Tuning Daemon. Oct 5 05:14:43 localhost python3.9[123980]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Oct 5 05:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5738 DF PROTO=TCP SPT=57322 DPT=9105 SEQ=1171247235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753736A0000000001030307) Oct 5 05:14:47 localhost python3.9[124072]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:14:47 localhost systemd[1]: Reloading. Oct 5 05:14:47 localhost systemd-sysv-generator[124099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:14:47 localhost systemd-rc-local-generator[124096]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:14:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:14:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5739 DF PROTO=TCP SPT=57322 DPT=9105 SEQ=1171247235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75377770000000001030307) Oct 5 05:14:48 localhost python3.9[124202]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:14:48 localhost systemd[1]: Reloading. Oct 5 05:14:48 localhost systemd-rc-local-generator[124229]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:14:48 localhost systemd-sysv-generator[124234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:14:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:14:49 localhost python3.9[124332]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5740 DF PROTO=TCP SPT=57322 DPT=9105 SEQ=1171247235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7537F760000000001030307) Oct 5 05:14:50 localhost python3.9[124425]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:50 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS Oct 5 05:14:51 localhost python3.9[124518]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:52 localhost python3.9[124617]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:14:53 localhost python3.9[124710]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:14:53 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Oct 5 05:14:53 localhost systemd[1]: Stopped Apply Kernel Variables. Oct 5 05:14:53 localhost systemd[1]: Stopping Apply Kernel Variables... Oct 5 05:14:53 localhost systemd[1]: Starting Apply Kernel Variables... Oct 5 05:14:53 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 5 05:14:53 localhost systemd[1]: Finished Apply Kernel Variables. Oct 5 05:14:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5741 DF PROTO=TCP SPT=57322 DPT=9105 SEQ=1171247235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7538F360000000001030307) Oct 5 05:14:54 localhost systemd[1]: session-39.scope: Deactivated successfully. Oct 5 05:14:54 localhost systemd[1]: session-39.scope: Consumed 1min 54.712s CPU time. Oct 5 05:14:54 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit. Oct 5 05:14:54 localhost systemd-logind[760]: Removed session 39. Oct 5 05:14:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61330 DF PROTO=TCP SPT=44446 DPT=9882 SEQ=1204090000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75397AC0000000001030307) Oct 5 05:14:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=522 DF PROTO=TCP SPT=41454 DPT=9100 SEQ=266032665 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7539FB60000000001030307) Oct 5 05:14:59 localhost sshd[124843]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:14:59 localhost systemd-logind[760]: New session 40 of user zuul. Oct 5 05:14:59 localhost systemd[1]: Started Session 40 of User zuul. 
Oct 5 05:15:00 localhost python3.9[124936]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:15:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:15:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48981 DF PROTO=TCP SPT=47520 DPT=9102 SEQ=1675345151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753AEF60000000001030307) Oct 5 05:15:02 localhost python3.9[125045]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:03 localhost python3.9[125141]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:15:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=31979 DF PROTO=TCP SPT=54706 DPT=9101 SEQ=3655169318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753B6F30000000001030307) Oct 5 05:15:04 localhost python3.9[125232]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:15:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5661 writes, 24K keys, 5661 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5661 writes, 723 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:15:05 localhost python3.9[125328]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:15:06 localhost python3.9[125382]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:15:07 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31981 DF PROTO=TCP SPT=54706 DPT=9101 SEQ=3655169318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753C2F60000000001030307) Oct 5 05:15:10 localhost python3.9[125476]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:15:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31982 DF PROTO=TCP SPT=54706 DPT=9101 SEQ=3655169318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753D2B60000000001030307) Oct 5 05:15:11 localhost python3.9[125623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:12 localhost python3.9[125715]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:15:13 localhost python3.9[125817]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:15:13 localhost python3.9[125865]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json 
_original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:14 localhost python3.9[125957]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:15:15 localhost python3.9[126030]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655713.784756-326-5847641947310/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:15 localhost python3.9[126122]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:16 localhost python3.9[126214]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False 
ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34551 DF PROTO=TCP SPT=42794 DPT=9105 SEQ=1431846929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753E89A0000000001030307) Oct 5 05:15:17 localhost python3.9[126306]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:17 localhost python3.9[126398]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Oct 5 05:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34552 DF PROTO=TCP SPT=42794 DPT=9105 SEQ=1431846929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753ECB60000000001030307) Oct 5 05:15:18 localhost python3.9[126488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 
filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:15:19 localhost python3.9[126582]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34553 DF PROTO=TCP SPT=42794 DPT=9105 SEQ=1431846929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC753F4B70000000001030307) Oct 5 05:15:23 localhost python3.9[126676]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34554 DF PROTO=TCP SPT=42794 DPT=9105 SEQ=1431846929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75404760000000001030307) Oct 5 05:15:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7218 DF PROTO=TCP SPT=58994 DPT=9882 SEQ=3599158788 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7540CDC0000000001030307) Oct 5 05:15:27 localhost python3.9[126770]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37861 DF PROTO=TCP SPT=40676 DPT=9100 SEQ=2450263722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75414F60000000001030307) Oct 5 05:15:31 localhost python3.9[126870]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True 
sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52576 DF PROTO=TCP SPT=45790 DPT=9102 SEQ=2120133004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75424360000000001030307) Oct 5 05:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17931 DF PROTO=TCP SPT=46080 DPT=9101 SEQ=2920033962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7542C230000000001030307) Oct 5 05:15:35 localhost python3.9[126964]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17933 DF PROTO=TCP SPT=46080 DPT=9101 SEQ=2920033962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75438360000000001030307) Oct 5 05:15:39 localhost python3.9[127058]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False 
cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17934 DF PROTO=TCP SPT=46080 DPT=9101 SEQ=2920033962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75447F60000000001030307) Oct 5 05:15:43 localhost python3.9[127152]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25629 DF PROTO=TCP SPT=34424 DPT=9105 SEQ=2361191707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7545DCA0000000001030307) Oct 5 05:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25630 DF PROTO=TCP SPT=34424 DPT=9105 SEQ=2361191707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75461B60000000001030307) Oct 5 05:15:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25631 DF PROTO=TCP SPT=34424 DPT=9105 SEQ=2361191707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75469B60000000001030307) Oct 5 05:15:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25632 DF PROTO=TCP SPT=34424 DPT=9105 SEQ=2361191707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75479760000000001030307) Oct 5 05:15:54 localhost python3.9[127322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:55 localhost python3.9[127427]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:15:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63950 DF PROTO=TCP SPT=50050 DPT=9882 SEQ=3076573801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754820C0000000001030307) Oct 5 05:15:56 localhost python3.9[127500]: 
ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759655755.0444143-721-118211803290572/.source.json _original_basename=.qvlwq_7o follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:15:57 localhost python3.9[127592]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:15:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59207 DF PROTO=TCP SPT=46658 DPT=9100 SEQ=3694838248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7548A370000000001030307) Oct 5 05:15:58 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 77.2 (257 of 333 items), suggesting rotation. 
Oct 5 05:15:58 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:15:58 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:15:58 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:16:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56771 DF PROTO=TCP SPT=34510 DPT=9102 SEQ=398146684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75499360000000001030307) Oct 5 05:16:03 localhost podman[127605]: 2025-10-05 09:15:57.444529748 +0000 UTC m=+0.044675154 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 5 05:16:03 localhost podman[127805]: Oct 5 05:16:03 localhost podman[127805]: 2025-10-05 09:16:03.38439747 +0000 UTC m=+0.077590486 container create 0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_ardinghelli, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.expose-services=, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, release=553, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:16:03 localhost systemd[1]: Started libpod-conmon-0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1.scope. Oct 5 05:16:03 localhost systemd[1]: Started libcrun container. Oct 5 05:16:03 localhost podman[127805]: 2025-10-05 09:16:03.352399474 +0000 UTC m=+0.045592530 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:16:03 localhost podman[127805]: 2025-10-05 09:16:03.453114431 +0000 UTC m=+0.146307447 container init 0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_ardinghelli, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph) Oct 5 05:16:03 localhost podman[127805]: 
2025-10-05 09:16:03.462037016 +0000 UTC m=+0.155230022 container start 0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_ardinghelli, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , name=rhceph) Oct 5 05:16:03 localhost podman[127805]: 2025-10-05 09:16:03.464273627 +0000 UTC m=+0.157466643 container attach 0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_ardinghelli, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.k8s.description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git) Oct 5 05:16:03 localhost condescending_ardinghelli[127837]: 167 167 Oct 5 05:16:03 localhost systemd[1]: libpod-0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1.scope: Deactivated successfully. Oct 5 05:16:03 localhost podman[127805]: 2025-10-05 09:16:03.467800764 +0000 UTC m=+0.160993800 container died 0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_ardinghelli, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, 
distribution-scope=public) Oct 5 05:16:03 localhost podman[127848]: 2025-10-05 09:16:03.560443711 +0000 UTC m=+0.083160519 container remove 0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_ardinghelli, distribution-scope=public, name=rhceph, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Oct 5 05:16:03 localhost systemd[1]: libpod-conmon-0fc2a369e6e6a50a4122531c0dade0d8a41ef058cc613bfe159b9a6ea595b0a1.scope: Deactivated successfully. 
Oct 5 05:16:03 localhost podman[127894]: Oct 5 05:16:03 localhost podman[127894]: 2025-10-05 09:16:03.776490416 +0000 UTC m=+0.073770700 container create 55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_keldysh, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., ceph=True, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:16:03 localhost systemd[1]: Started libpod-conmon-55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a.scope. Oct 5 05:16:03 localhost systemd[1]: Started libcrun container. 
Oct 5 05:16:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b94bcb3d66e520cd291ea7d5fdf0f695975e1ec5708b4ec3525858bdf735e44/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 05:16:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b94bcb3d66e520cd291ea7d5fdf0f695975e1ec5708b4ec3525858bdf735e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:16:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b94bcb3d66e520cd291ea7d5fdf0f695975e1ec5708b4ec3525858bdf735e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:16:03 localhost podman[127894]: 2025-10-05 09:16:03.846085442 +0000 UTC m=+0.143365736 container init 55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_keldysh, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, distribution-scope=public, release=553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Oct 5 05:16:03 localhost podman[127894]: 2025-10-05 09:16:03.747470192 +0000 UTC m=+0.044750486 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:16:03 localhost podman[127894]: 2025-10-05 09:16:03.854598395 +0000 UTC m=+0.151878679 container start 55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_keldysh, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, RELEASE=main, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:16:03 localhost podman[127894]: 2025-10-05 09:16:03.854881453 +0000 UTC m=+0.152161817 container attach 55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_keldysh, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, 
CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux ) Oct 5 05:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=808 DF PROTO=TCP SPT=44718 DPT=9101 SEQ=2511933934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754A1520000000001030307) Oct 5 05:16:04 localhost systemd[1]: var-lib-containers-storage-overlay-7f65591ef1f7f63fddb83fce6c646771492d9fd5c706138635b403b53cd1e599-merged.mount: Deactivated successfully. 
Oct 5 05:16:04 localhost python3.9[128039]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:04 localhost busy_keldysh[127910]: [ Oct 5 05:16:04 localhost busy_keldysh[127910]: { Oct 5 05:16:04 localhost busy_keldysh[127910]: "available": false, Oct 5 05:16:04 localhost busy_keldysh[127910]: "ceph_device": false, Oct 5 05:16:04 localhost busy_keldysh[127910]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 05:16:04 localhost busy_keldysh[127910]: "lsm_data": {}, Oct 5 05:16:04 localhost busy_keldysh[127910]: "lvs": [], Oct 5 05:16:04 localhost busy_keldysh[127910]: "path": "/dev/sr0", Oct 5 05:16:04 localhost busy_keldysh[127910]: "rejected_reasons": [ Oct 5 05:16:04 localhost busy_keldysh[127910]: "Has a FileSystem", Oct 5 05:16:04 localhost busy_keldysh[127910]: "Insufficient space (<5GB)" Oct 5 05:16:04 localhost busy_keldysh[127910]: ], Oct 5 05:16:04 localhost busy_keldysh[127910]: "sys_api": { Oct 5 05:16:04 localhost busy_keldysh[127910]: "actuators": null, Oct 5 05:16:04 localhost busy_keldysh[127910]: "device_nodes": "sr0", Oct 5 05:16:04 localhost busy_keldysh[127910]: "human_readable_size": "482.00 KB", Oct 5 05:16:04 localhost busy_keldysh[127910]: "id_bus": "ata", Oct 5 05:16:04 localhost 
busy_keldysh[127910]: "model": "QEMU DVD-ROM", Oct 5 05:16:04 localhost busy_keldysh[127910]: "nr_requests": "2", Oct 5 05:16:04 localhost busy_keldysh[127910]: "partitions": {}, Oct 5 05:16:04 localhost busy_keldysh[127910]: "path": "/dev/sr0", Oct 5 05:16:04 localhost busy_keldysh[127910]: "removable": "1", Oct 5 05:16:04 localhost busy_keldysh[127910]: "rev": "2.5+", Oct 5 05:16:04 localhost busy_keldysh[127910]: "ro": "0", Oct 5 05:16:04 localhost busy_keldysh[127910]: "rotational": "1", Oct 5 05:16:04 localhost busy_keldysh[127910]: "sas_address": "", Oct 5 05:16:04 localhost busy_keldysh[127910]: "sas_device_handle": "", Oct 5 05:16:04 localhost busy_keldysh[127910]: "scheduler_mode": "mq-deadline", Oct 5 05:16:04 localhost busy_keldysh[127910]: "sectors": 0, Oct 5 05:16:04 localhost busy_keldysh[127910]: "sectorsize": "2048", Oct 5 05:16:04 localhost busy_keldysh[127910]: "size": 493568.0, Oct 5 05:16:04 localhost busy_keldysh[127910]: "support_discard": "0", Oct 5 05:16:04 localhost busy_keldysh[127910]: "type": "disk", Oct 5 05:16:04 localhost busy_keldysh[127910]: "vendor": "QEMU" Oct 5 05:16:04 localhost busy_keldysh[127910]: } Oct 5 05:16:04 localhost busy_keldysh[127910]: } Oct 5 05:16:04 localhost busy_keldysh[127910]: ] Oct 5 05:16:04 localhost systemd[1]: libpod-55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a.scope: Deactivated successfully. 
Oct 5 05:16:04 localhost podman[127894]: 2025-10-05 09:16:04.731254281 +0000 UTC m=+1.028534535 container died 55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_keldysh, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, io.buildah.version=1.33.12, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, RELEASE=main, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Oct 5 05:16:04 localhost systemd[1]: var-lib-containers-storage-overlay-4b94bcb3d66e520cd291ea7d5fdf0f695975e1ec5708b4ec3525858bdf735e44-merged.mount: Deactivated successfully. 
Oct 5 05:16:04 localhost podman[129439]: 2025-10-05 09:16:04.835865425 +0000 UTC m=+0.085248835 container remove 55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_keldysh, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, ceph=True, release=553, name=rhceph) Oct 5 05:16:04 localhost systemd[1]: libpod-conmon-55dea4791086e919e2bc3cfc87c59298bfd1000d0a9a29d66db61f3af22bdf1a.scope: Deactivated successfully. 
Oct 5 05:16:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=810 DF PROTO=TCP SPT=44718 DPT=9101 SEQ=2511933934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754AD770000000001030307) Oct 5 05:16:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=811 DF PROTO=TCP SPT=44718 DPT=9101 SEQ=2511933934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754BD360000000001030307) Oct 5 05:16:12 localhost podman[129266]: 2025-10-05 09:16:04.703373088 +0000 UTC m=+0.028981986 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 05:16:13 localhost python3.9[129660]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:15 localhost podman[129672]: 2025-10-05 09:16:13.716889827 +0000 UTC m=+0.041818756 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Oct 5 05:16:16 localhost python3.9[129836]: ansible-containers.podman.podman_image Invoked with 
auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49363 DF PROTO=TCP SPT=52202 DPT=9105 SEQ=761714303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754D2FA0000000001030307) Oct 5 05:16:17 localhost podman[129848]: 2025-10-05 09:16:16.801224456 +0000 UTC m=+0.046383330 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49364 DF PROTO=TCP SPT=52202 DPT=9105 SEQ=761714303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754D6F70000000001030307) Oct 5 05:16:18 localhost python3.9[130012]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 
'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49365 DF PROTO=TCP SPT=52202 DPT=9105 SEQ=761714303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754DEF60000000001030307) Oct 5 05:16:22 localhost podman[130024]: 2025-10-05 09:16:19.053022598 +0000 UTC m=+0.046479913 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Oct 5 05:16:23 localhost python3.9[130198]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Oct 5 05:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49366 DF PROTO=TCP SPT=52202 DPT=9105 SEQ=761714303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754EEB60000000001030307) Oct 5 05:16:24 localhost podman[130211]: 2025-10-05 09:16:23.287525373 +0000 UTC m=+0.044475089 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Oct 5 05:16:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29197 DF PROTO=TCP SPT=58412 DPT=9100 SEQ=908492280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754F7360000000001030307) Oct 5 05:16:26 localhost systemd[1]: session-40.scope: Deactivated successfully. Oct 5 05:16:26 localhost systemd[1]: session-40.scope: Consumed 1min 27.792s CPU time. Oct 5 05:16:26 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit. Oct 5 05:16:26 localhost systemd-logind[760]: Removed session 40. Oct 5 05:16:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29198 DF PROTO=TCP SPT=58412 DPT=9100 SEQ=908492280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC754FF370000000001030307) Oct 5 05:16:31 localhost sshd[130322]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:16:31 localhost systemd-logind[760]: New session 41 of user zuul. Oct 5 05:16:31 localhost systemd[1]: Started Session 41 of User zuul. 
Oct 5 05:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19768 DF PROTO=TCP SPT=57936 DPT=9102 SEQ=4076149967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7550E760000000001030307) Oct 5 05:16:32 localhost python3.9[130415]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33848 DF PROTO=TCP SPT=56930 DPT=9101 SEQ=4240760772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75516830000000001030307) Oct 5 05:16:35 localhost python3.9[130766]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Oct 5 05:16:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33850 DF PROTO=TCP SPT=56930 DPT=9101 SEQ=4240760772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75522770000000001030307) Oct 5 05:16:38 localhost python3.9[130859]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:16:39 localhost python3.9[131168]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True 
sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:16:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33851 DF PROTO=TCP SPT=56930 DPT=9101 SEQ=4240760772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75532360000000001030307) Oct 5 05:16:43 localhost python3.9[131262]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17663 DF PROTO=TCP SPT=53968 DPT=9105 SEQ=2392189127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755482A0000000001030307) Oct 5 05:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17664 DF PROTO=TCP SPT=53968 DPT=9105 SEQ=2392189127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7554C360000000001030307) Oct 5 05:16:48 localhost python3.9[131514]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False 
scope=system no_block=False force=None Oct 5 05:16:49 localhost python3.9[131607]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17665 DF PROTO=TCP SPT=53968 DPT=9105 SEQ=2392189127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75554360000000001030307) Oct 5 05:16:50 localhost python3.9[131699]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Oct 5 05:16:52 localhost kernel: SELinux: Converting 2742 SID table entries... Oct 5 05:16:52 localhost kernel: SELinux: policy capability network_peer_controls=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability open_perms=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability extended_socket_class=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability always_check_network=0 Oct 5 05:16:52 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 5 05:16:52 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Oct 5 05:16:53 localhost python3.9[132158]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:16:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17666 DF PROTO=TCP SPT=53968 DPT=9105 SEQ=2392189127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC75563F70000000001030307) Oct 5 05:16:54 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=18 res=1 Oct 5 05:16:54 localhost python3.9[132256]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:16:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7977 DF PROTO=TCP SPT=40112 DPT=9882 SEQ=2885671345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7556C6C0000000001030307) Oct 5 05:16:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13591 DF PROTO=TCP SPT=53740 DPT=9100 SEQ=2290875884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75574760000000001030307) Oct 5 05:16:58 localhost python3.9[132350]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:17:00 localhost python3.9[132595]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:17:01 localhost python3.9[132685]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:17:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36564 DF PROTO=TCP SPT=47406 DPT=9102 SEQ=414810703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75583B70000000001030307) Oct 5 05:17:02 localhost python3.9[132779]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65289 DF PROTO=TCP SPT=36164 
DPT=9101 SEQ=2910100059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7558BB20000000001030307) Oct 5 05:17:05 localhost python3.9[132873]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:17:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65291 DF PROTO=TCP SPT=36164 DPT=9101 SEQ=2910100059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75597B70000000001030307) Oct 5 05:17:09 localhost python3.9[133044]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 05:17:09 localhost systemd[1]: Reloading. Oct 5 05:17:09 localhost systemd-rc-local-generator[133070]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:17:09 localhost systemd-sysv-generator[133076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:17:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:17:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65292 DF PROTO=TCP SPT=36164 DPT=9101 SEQ=2910100059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755A7760000000001030307) Oct 5 05:17:12 localhost python3.9[133176]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:17:13 localhost python3.9[133268]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:13 localhost python3.9[133362]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:14 localhost python3.9[133454]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 05:17:15 localhost python3.9[133546]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:15 localhost python3.9[133619]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655834.7996697-566-56406505945445/.source _original_basename=.dw8cezd_ follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:16 localhost python3.9[133711]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36820 DF PROTO=TCP SPT=58348 DPT=9105 SEQ=3393921348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755BD5A0000000001030307) Oct 5 05:17:17 localhost python3.9[133803]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Oct 5 05:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36821 DF PROTO=TCP SPT=58348 DPT=9105 SEQ=3393921348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC755C1760000000001030307) Oct 5 05:17:18 localhost python3.9[133895]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:19 localhost python3.9[133987]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:19 localhost python3.9[134060]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655838.6064177-691-149978827676282/.source.yaml _original_basename=.fkna4rlv follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36822 DF PROTO=TCP SPT=58348 DPT=9105 SEQ=3393921348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755C9760000000001030307) Oct 5 05:17:20 localhost python3.9[134152]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml Oct 5 05:17:21 localhost ansible-async_wrapper.py[134257]: Invoked with j197521121034 300 /home/zuul/.ansible/tmp/ansible-tmp-1759655841.0094624-763-178135096570544/AnsiballZ_edpm_os_net_config.py _ Oct 5 05:17:21 localhost 
ansible-async_wrapper.py[134260]: Starting module and watcher Oct 5 05:17:21 localhost ansible-async_wrapper.py[134260]: Start watching 134261 (300) Oct 5 05:17:21 localhost ansible-async_wrapper.py[134261]: Start module (134261) Oct 5 05:17:21 localhost ansible-async_wrapper.py[134257]: Return async_wrapper task started. Oct 5 05:17:22 localhost python3.9[134262]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False Oct 5 05:17:22 localhost ansible-async_wrapper.py[134261]: Module complete (134261) Oct 5 05:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36823 DF PROTO=TCP SPT=58348 DPT=9105 SEQ=3393921348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755D9360000000001030307) Oct 5 05:17:25 localhost python3.9[134354]: ansible-ansible.legacy.async_status Invoked with jid=j197521121034.134257 mode=status _async_dir=/root/.ansible_async Oct 5 05:17:26 localhost python3.9[134413]: ansible-ansible.legacy.async_status Invoked with jid=j197521121034.134257 mode=cleanup _async_dir=/root/.ansible_async Oct 5 05:17:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47977 DF PROTO=TCP SPT=45842 DPT=9882 SEQ=805713309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755E19C0000000001030307) Oct 5 05:17:26 localhost python3.9[134505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:26 localhost ansible-async_wrapper.py[134260]: Done in kid B. 
Oct 5 05:17:27 localhost python3.9[134578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655846.28688-829-92726294874281/.source.returncode _original_basename=._w8ixt0n follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:27 localhost python3.9[134670]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:17:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35141 DF PROTO=TCP SPT=60284 DPT=9100 SEQ=1962085421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755E9B70000000001030307) Oct 5 05:17:28 localhost python3.9[134743]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655847.5345986-877-81683092406009/.source.cfg _original_basename=.4gowq1bq follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:17:29 localhost python3.9[134835]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:17:29 localhost systemd[1]: Reloading 
Network Manager... Oct 5 05:17:29 localhost NetworkManager[5970]: [1759655849.4121] audit: op="reload" arg="0" pid=134839 uid=0 result="success" Oct 5 05:17:29 localhost NetworkManager[5970]: [1759655849.4129] config: signal: SIGHUP (no changes from disk) Oct 5 05:17:29 localhost systemd[1]: Reloaded Network Manager. Oct 5 05:17:30 localhost systemd[1]: session-41.scope: Deactivated successfully. Oct 5 05:17:30 localhost systemd[1]: session-41.scope: Consumed 34.935s CPU time. Oct 5 05:17:30 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit. Oct 5 05:17:30 localhost systemd-logind[760]: Removed session 41. Oct 5 05:17:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38185 DF PROTO=TCP SPT=39186 DPT=9102 SEQ=2650614058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC755F8F70000000001030307) Oct 5 05:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63335 DF PROTO=TCP SPT=60652 DPT=9101 SEQ=571972993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75600E20000000001030307) Oct 5 05:17:36 localhost sshd[134854]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:17:36 localhost systemd-logind[760]: New session 42 of user zuul. Oct 5 05:17:36 localhost systemd[1]: Started Session 42 of User zuul. 
Oct 5 05:17:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63337 DF PROTO=TCP SPT=60652 DPT=9101 SEQ=571972993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7560CF60000000001030307)
Oct 5 05:17:37 localhost python3.9[134947]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:17:38 localhost python3.9[135041]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:17:40 localhost python3.9[135186]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:17:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63338 DF PROTO=TCP SPT=60652 DPT=9101 SEQ=571972993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7561CB60000000001030307)
Oct 5 05:17:41 localhost systemd[1]: session-42.scope: Deactivated successfully.
Oct 5 05:17:41 localhost systemd[1]: session-42.scope: Consumed 2.138s CPU time.
Oct 5 05:17:41 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit.
Oct 5 05:17:41 localhost systemd-logind[760]: Removed session 42.
Oct 5 05:17:46 localhost sshd[135202]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:17:46 localhost systemd-logind[760]: New session 43 of user zuul.
Oct 5 05:17:46 localhost systemd[1]: Started Session 43 of User zuul.
Oct 5 05:17:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43821 DF PROTO=TCP SPT=40026 DPT=9105 SEQ=4272409275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756328A0000000001030307)
Oct 5 05:17:47 localhost python3.9[135295]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43822 DF PROTO=TCP SPT=40026 DPT=9105 SEQ=4272409275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75636760000000001030307)
Oct 5 05:17:48 localhost sshd[135346]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:17:48 localhost python3.9[135391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:17:49 localhost python3.9[135487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43823 DF PROTO=TCP SPT=40026 DPT=9105 SEQ=4272409275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7563E770000000001030307)
Oct 5 05:17:50 localhost python3.9[135541]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43824 DF PROTO=TCP SPT=40026 DPT=9105 SEQ=4272409275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7564E360000000001030307)
Oct 5 05:17:54 localhost python3.9[135635]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:17:55 localhost python3.9[135782]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:17:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61195 DF PROTO=TCP SPT=41318 DPT=9882 SEQ=2822971810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75656CC0000000001030307)
Oct 5 05:17:56 localhost python3.9[135874]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:17:57 localhost python3.9[135978]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:17:57 localhost python3.9[136026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:17:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22353 DF PROTO=TCP SPT=41696 DPT=9100 SEQ=3296841525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7565EF70000000001030307)
Oct 5 05:17:58 localhost python3.9[136118]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:17:58 localhost python3.9[136166]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:17:59 localhost python3.9[136258]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:00 localhost python3.9[136350]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:01 localhost python3.9[136442]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:01 localhost python3.9[136534]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19861 DF PROTO=TCP SPT=55488 DPT=9102 SEQ=682598863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7566DF60000000001030307)
Oct 5 05:18:02 localhost python3.9[136626]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40429 DF PROTO=TCP SPT=60916 DPT=9101 SEQ=3704188966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75676130000000001030307)
Oct 5 05:18:06 localhost python3.9[136720]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40431 DF PROTO=TCP SPT=60916 DPT=9101 SEQ=3704188966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75682360000000001030307)
Oct 5 05:18:07 localhost python3.9[136814]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:18:08 localhost python3.9[136906]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:18:09 localhost python3.9[136998]: ansible-service_facts Invoked
Oct 5 05:18:09 localhost network[137015]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 5 05:18:09 localhost network[137016]: 'network-scripts' will be removed from distribution in near future.
Oct 5 05:18:09 localhost network[137017]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 5 05:18:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:18:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40432 DF PROTO=TCP SPT=60916 DPT=9101 SEQ=3704188966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75691F60000000001030307)
Oct 5 05:18:14 localhost python3.9[137466]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47337 DF PROTO=TCP SPT=36878 DPT=9105 SEQ=1866231748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756A7BA0000000001030307)
Oct 5 05:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47338 DF PROTO=TCP SPT=36878 DPT=9105 SEQ=1866231748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756ABB60000000001030307)
Oct 5 05:18:19 localhost python3.9[137560]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 5 05:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47339 DF PROTO=TCP SPT=36878 DPT=9105 SEQ=1866231748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756B3B60000000001030307)
Oct 5 05:18:20 localhost python3.9[137652]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:21 localhost python3.9[137727]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655900.3528168-624-110305206476653/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:22 localhost python3.9[137821]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:23 localhost python3.9[137896]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655901.8957293-668-249954304892216/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47340 DF PROTO=TCP SPT=36878 DPT=9105 SEQ=1866231748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756C3770000000001030307)
Oct 5 05:18:24 localhost python3.9[137990]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54145 DF PROTO=TCP SPT=39184 DPT=9100 SEQ=4020833612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756CBF60000000001030307)
Oct 5 05:18:26 localhost python3.9[138084]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:18:27 localhost python3.9[138138]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:18:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54146 DF PROTO=TCP SPT=39184 DPT=9100 SEQ=4020833612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756D3F60000000001030307)
Oct 5 05:18:29 localhost python3.9[138232]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:18:30 localhost python3.9[138286]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:18:30 localhost chronyd[25796]: chronyd exiting
Oct 5 05:18:30 localhost systemd[1]: Stopping NTP client/server...
Oct 5 05:18:30 localhost systemd[1]: chronyd.service: Deactivated successfully.
Oct 5 05:18:30 localhost systemd[1]: Stopped NTP client/server.
Oct 5 05:18:30 localhost systemd[1]: Starting NTP client/server...
Oct 5 05:18:30 localhost chronyd[138294]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 5 05:18:30 localhost chronyd[138294]: Frequency -26.971 +/- 0.234 ppm read from /var/lib/chrony/drift
Oct 5 05:18:30 localhost chronyd[138294]: Loaded seccomp filter (level 2)
Oct 5 05:18:30 localhost systemd[1]: Started NTP client/server.
Oct 5 05:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5146 DF PROTO=TCP SPT=52594 DPT=9102 SEQ=138562327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756E3370000000001030307)
Oct 5 05:18:32 localhost systemd[1]: session-43.scope: Deactivated successfully.
Oct 5 05:18:32 localhost systemd[1]: session-43.scope: Consumed 27.118s CPU time.
Oct 5 05:18:32 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit.
Oct 5 05:18:32 localhost systemd-logind[760]: Removed session 43.
Oct 5 05:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5147 DF PROTO=TCP SPT=52594 DPT=9102 SEQ=138562327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756EB370000000001030307)
Oct 5 05:18:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28665 DF PROTO=TCP SPT=37216 DPT=9101 SEQ=1710431507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC756F7360000000001030307)
Oct 5 05:18:38 localhost sshd[138310]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:18:38 localhost systemd-logind[760]: New session 44 of user zuul.
Oct 5 05:18:38 localhost systemd[1]: Started Session 44 of User zuul.
Oct 5 05:18:39 localhost python3.9[138403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:18:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28666 DF PROTO=TCP SPT=37216 DPT=9101 SEQ=1710431507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75706F70000000001030307)
Oct 5 05:18:41 localhost python3.9[138499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:42 localhost python3.9[138604]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:42 localhost python3.9[138652]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.wy4ducza recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:43 localhost python3.9[138744]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:44 localhost python3.9[138819]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759655922.9968235-145-143507348749825/.source _original_basename=.puj8fi6e follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:45 localhost python3.9[138911]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:46 localhost python3.9[139003]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24551 DF PROTO=TCP SPT=41670 DPT=9105 SEQ=1968391712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7571CEB0000000001030307)
Oct 5 05:18:47 localhost python3.9[139076]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655926.0012708-217-192729949898350/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:47 localhost python3.9[139168]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24552 DF PROTO=TCP SPT=41670 DPT=9105 SEQ=1968391712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75720F60000000001030307)
Oct 5 05:18:48 localhost python3.9[139241]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759655927.2122765-217-31128401859937/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:18:48 localhost python3.9[139333]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:49 localhost python3.9[139425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24553 DF PROTO=TCP SPT=41670 DPT=9105 SEQ=1968391712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75728F60000000001030307)
Oct 5 05:18:50 localhost auditd[725]: Audit daemon rotating log files
Oct 5 05:18:51 localhost python3.9[139498]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655929.1934133-328-88903308767628/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:51 localhost python3.9[139590]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:52 localhost python3.9[139663]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655931.1994767-373-198094134537590/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:53 localhost python3.9[139755]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:18:53 localhost systemd[1]: Reloading.
Oct 5 05:18:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24554 DF PROTO=TCP SPT=41670 DPT=9105 SEQ=1968391712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75738B70000000001030307)
Oct 5 05:18:53 localhost systemd-rc-local-generator[139780]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:18:53 localhost systemd-sysv-generator[139783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:18:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:18:54 localhost systemd[1]: Reloading.
Oct 5 05:18:54 localhost systemd-rc-local-generator[139816]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:18:54 localhost systemd-sysv-generator[139824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:18:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:18:54 localhost systemd[1]: Starting EDPM Container Shutdown...
Oct 5 05:18:54 localhost systemd[1]: Finished EDPM Container Shutdown.
Oct 5 05:18:55 localhost python3.9[139925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:55 localhost python3.9[139998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655934.5825825-443-182487405090311/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1590 DF PROTO=TCP SPT=58770 DPT=9882 SEQ=3267971290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757412C0000000001030307)
Oct 5 05:18:56 localhost python3.9[140090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:18:56 localhost python3.9[140163]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759655935.8313053-488-210345189506131/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:18:57 localhost python3.9[140255]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:18:57 localhost systemd[1]: Reloading.
Oct 5 05:18:57 localhost systemd-rc-local-generator[140279]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:18:57 localhost systemd-sysv-generator[140284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:18:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:18:57 localhost systemd[1]: Starting Create netns directory...
Oct 5 05:18:57 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 05:18:57 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 05:18:57 localhost systemd[1]: Finished Create netns directory.
Oct 5 05:18:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12214 DF PROTO=TCP SPT=46058 DPT=9100 SEQ=1348266787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75749360000000001030307) Oct 5 05:18:58 localhost python3.9[140388]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:18:58 localhost network[140405]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:18:58 localhost network[140406]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:18:58 localhost network[140407]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:18:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:19:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7046 DF PROTO=TCP SPT=37498 DPT=9102 SEQ=3077260457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75758760000000001030307) Oct 5 05:19:03 localhost python3.9[140609]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42125 DF PROTO=TCP SPT=45572 DPT=9101 SEQ=2111863416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75760720000000001030307) Oct 5 05:19:04 localhost python3.9[140684]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759655943.1817307-611-216051015898962/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:05 localhost python3.9[140775]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42127 DF PROTO=TCP SPT=45572 DPT=9101 SEQ=2111863416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7576C760000000001030307) Oct 5 05:19:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42128 DF PROTO=TCP SPT=45572 DPT=9101 SEQ=2111863416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7577C370000000001030307) Oct 5 05:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39946 DF PROTO=TCP SPT=50382 DPT=9105 SEQ=2559858732 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757921A0000000001030307) Oct 5 05:19:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39947 DF PROTO=TCP SPT=50382 DPT=9105 SEQ=2559858732 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75796360000000001030307) 
Oct 5 05:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39948 DF PROTO=TCP SPT=50382 DPT=9105 SEQ=2559858732 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7579E370000000001030307) Oct 5 05:19:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39949 DF PROTO=TCP SPT=50382 DPT=9105 SEQ=2559858732 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757ADF60000000001030307) Oct 5 05:19:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23147 DF PROTO=TCP SPT=49208 DPT=9882 SEQ=465660760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757B65C0000000001030307) Oct 5 05:19:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21505 DF PROTO=TCP SPT=46954 DPT=9100 SEQ=1168675204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757BE760000000001030307) Oct 5 05:19:30 localhost systemd[1]: session-44.scope: Deactivated successfully. Oct 5 05:19:30 localhost systemd[1]: session-44.scope: Consumed 14.237s CPU time. Oct 5 05:19:30 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit. Oct 5 05:19:30 localhost systemd-logind[760]: Removed session 44. 
Oct 5 05:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34635 DF PROTO=TCP SPT=50706 DPT=9102 SEQ=1860688033 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757CDB70000000001030307) Oct 5 05:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1421 DF PROTO=TCP SPT=49536 DPT=9101 SEQ=730272010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757D5A30000000001030307) Oct 5 05:19:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1423 DF PROTO=TCP SPT=49536 DPT=9101 SEQ=730272010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757E1B60000000001030307) Oct 5 05:19:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1424 DF PROTO=TCP SPT=49536 DPT=9101 SEQ=730272010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC757F1760000000001030307) Oct 5 05:19:43 localhost sshd[140882]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:19:43 localhost systemd-logind[760]: New session 45 of user zuul. Oct 5 05:19:43 localhost systemd[1]: Started Session 45 of User zuul. 
Oct 5 05:19:44 localhost python3.9[140975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:19:45 localhost python3.9[141071]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:46 localhost python3.9[141176]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17711 DF PROTO=TCP SPT=55898 DPT=9105 SEQ=805503467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC758074A0000000001030307) Oct 5 05:19:47 localhost python3.9[141224]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ivv6gmx7 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17712 DF PROTO=TCP SPT=55898 DPT=9105 SEQ=805503467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC7580B370000000001030307) Oct 5 05:19:48 localhost python3.9[141316]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:48 localhost python3.9[141364]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.e6usklee recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:49 localhost python3.9[141456]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:19:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17713 DF PROTO=TCP SPT=55898 DPT=9105 SEQ=805503467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75813360000000001030307) Oct 5 05:19:49 localhost python3.9[141548]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:50 localhost python3.9[141596]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown 
_original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:19:50 localhost python3.9[141688]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:51 localhost python3.9[141736]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:19:52 localhost python3.9[141828]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:52 localhost python3.9[141920]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:53 localhost python3.9[141968]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root 
dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17714 DF PROTO=TCP SPT=55898 DPT=9105 SEQ=805503467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75822F60000000001030307) Oct 5 05:19:54 localhost python3.9[142060]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:54 localhost python3.9[142108]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:55 localhost python3.9[142200]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:19:55 localhost systemd[1]: Reloading. Oct 5 05:19:55 localhost systemd-rc-local-generator[142225]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:19:55 localhost systemd-sysv-generator[142230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:19:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:19:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51279 DF PROTO=TCP SPT=40544 DPT=9882 SEQ=1835653366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7582B8C0000000001030307) Oct 5 05:19:56 localhost python3.9[142330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:57 localhost python3.9[142378]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:57 localhost python3.9[142470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:19:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34605 DF PROTO=TCP SPT=40770 DPT=9100 SEQ=1497924191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75833B60000000001030307) Oct 5 05:19:58 localhost python3.9[142518]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:19:58 localhost python3.9[142610]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:19:59 localhost systemd[1]: Reloading. Oct 5 05:19:59 localhost systemd-rc-local-generator[142635]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:19:59 localhost systemd-sysv-generator[142638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:19:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:19:59 localhost systemd[1]: Starting Create netns directory... Oct 5 05:19:59 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:19:59 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:19:59 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:20:00 localhost python3.9[142742]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:20:00 localhost network[142759]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:20:00 localhost network[142760]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:20:00 localhost network[142761]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:20:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:20:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4365 DF PROTO=TCP SPT=41880 DPT=9102 SEQ=4234030265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75842B60000000001030307) Oct 5 05:20:03 localhost python3.9[142963]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18230 DF PROTO=TCP SPT=37160 DPT=9101 SEQ=837011572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7584AD30000000001030307) Oct 5 05:20:04 localhost python3.9[143011]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Oct 5 05:20:05 localhost python3.9[143103]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:05 localhost python3.9[143195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:06 localhost python3.9[143268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656005.2317634-611-169566045525114/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18232 DF PROTO=TCP SPT=37160 DPT=9101 SEQ=837011572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75856F60000000001030307) Oct 5 05:20:07 localhost python3.9[143360]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Oct 5 05:20:07 localhost systemd[1]: Starting Time & Date Service... Oct 5 05:20:07 localhost systemd[1]: Started Time & Date Service. 
Oct 5 05:20:08 localhost python3.9[143456]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:08 localhost python3.9[143548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:09 localhost python3.9[143621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656008.4408143-716-277502420840310/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:10 localhost python3.9[143713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18233 DF PROTO=TCP SPT=37160 DPT=9101 SEQ=837011572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75866B60000000001030307) Oct 5 05:20:11 localhost python3.9[143786]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656009.7419734-761-47762975915021/.source.yaml _original_basename=.by3w3uks follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:12 localhost python3.9[143878]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:12 localhost python3.9[143953]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656011.6469066-806-190489918153044/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:13 localhost python3.9[144075]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:14 localhost podman[144217]: 2025-10-05 09:20:14.157482776 +0000 UTC m=+0.087233665 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12) Oct 5 05:20:14 localhost podman[144217]: 2025-10-05 09:20:14.251100665 +0000 UTC m=+0.180851544 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:20:14 localhost python3.9[144260]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:15 localhost python3[144449]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:20:15 localhost python3.9[144555]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:16 localhost python3.9[144643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656015.483717-922-222871415997313/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42955 DF PROTO=TCP SPT=50270 DPT=9105 SEQ=2791561605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7587C7A0000000001030307) Oct 5 05:20:17 localhost python3.9[144735]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:17 
localhost python3.9[144808]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656016.748069-968-172623281353620/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:18 localhost python3.9[144900]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:19 localhost python3.9[144973]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656018.0277386-1013-128899810828855/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:19 localhost python3.9[145065]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:20 localhost python3.9[145138]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656019.1972053-1057-238136959077511/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:21 localhost python3.9[145230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:22 localhost python3.9[145303]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656021.0176678-1103-120822147860624/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:23 localhost python3.9[145395]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:23 localhost python3.9[145487]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:24 localhost python3.9[145582]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include 
"/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16875 DF PROTO=TCP SPT=48218 DPT=9100 SEQ=535797787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7589CCD0000000001030307) Oct 5 05:20:25 localhost python3.9[145675]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:25 localhost python3.9[145767]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19111 DF PROTO=TCP SPT=55552 DPT=9882 SEQ=1540482370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC758A0BC0000000001030307) Oct 5 
05:20:26 localhost python3.9[145859]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Oct 5 05:20:27 localhost python3.9[145952]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Oct 5 05:20:27 localhost systemd[1]: session-45.scope: Deactivated successfully. Oct 5 05:20:27 localhost systemd[1]: session-45.scope: Consumed 27.044s CPU time. Oct 5 05:20:27 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit. Oct 5 05:20:27 localhost systemd-logind[760]: Removed session 45. Oct 5 05:20:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50057 DF PROTO=TCP SPT=39544 DPT=9102 SEQ=1427808522 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC758B3F50000000001030307) Oct 5 05:20:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8220 DF PROTO=TCP SPT=36378 DPT=9101 SEQ=29980824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC758C0030000000001030307) Oct 5 05:20:35 localhost sshd[145968]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:20:35 localhost systemd-logind[760]: New session 46 of user zuul. Oct 5 05:20:35 localhost systemd[1]: Started Session 46 of User zuul. Oct 5 05:20:36 localhost python3.9[146063]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None Oct 5 05:20:37 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
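The two `ansible.posix.mount` invocations above (src=none, fstype=hugetlbfs, opts=pagesize=1G / pagesize=2M, state=mounted, boot=True, dump=0, passno=0) create persistent hugetlbfs mounts. A minimal sketch of the fstab-style entries those parameters correspond to; the exact line formatting is an assumption, the paths and options are taken from the log:

```python
def hugetlbfs_fstab_entry(path: str, pagesize: str) -> str:
    """Build an fstab line mirroring the ansible.posix.mount parameters
    seen in the log: src=none, fstype=hugetlbfs, opts=pagesize=...,
    dump=0, passno=0."""
    return f"none {path} hugetlbfs pagesize={pagesize} 0 0"

entries = [
    hugetlbfs_fstab_entry("/dev/hugepages1G", "1G"),
    hugetlbfs_fstab_entry("/dev/hugepages2M", "2M"),
]
```

With state=mounted and boot=True the module both mounts the filesystem immediately and persists the entry, which is why no separate `mount` command appears in the log.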
Oct 5 05:20:37 localhost python3.9[146157]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:20:39 localhost python3.9[146251]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Oct 5 05:20:39 localhost chronyd[138294]: Selected source 199.182.221.110 (pool.ntp.org) Oct 5 05:20:40 localhost python3.9[146343]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.kv9clk5i follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:20:42 localhost python3.9[146418]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.kv9clk5i mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656040.1303446-191-66063582677344/.source.kv9clk5i _original_basename=.m5esbzaa follow=False checksum=a5b7abc70e8cdf8ce48ea3fad60c0d7fc823809c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:45 localhost python3.9[146510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:20:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8238 DF PROTO=TCP SPT=32914 DPT=9105 SEQ=533127200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC758F1AB0000000001030307) Oct 5 05:20:46 localhost python3.9[146602]: ansible-ansible.builtin.blockinfile Invoked with block=np0005471148.localdomain,192.168.122.105,np0005471148* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCav0eZ81SP1lgxNKp8kzS2MGddVZXD3CnfZarlQErB75DRL4T/NvcVXnfxKn4UPX+h1zwIlKhrD0kHzKTVqifYPUqAmLb8rYREMTmXhQxto2b7VGPMQJtDAprHqyUEFlSdV8NbN3SVctntX/mSKO9bD06JFfa3F62ItPVHy6SnAKMzgNdSszOdKFvbEzC2oxcehr1uB2BAOIiTb1KxyTjXhvXZSYUsBxiGWPOP83oZQxCJlh/VjIUu6P2F6+mv1415n4ujbEujO8/iVbBF1uy28bTobQfABbfPNDNUCd9Gr+xDlT4JuuYTcjqG+gr3yvctzwj/+lxYcJbC0ZYtRhJ0pu8gjm44UFVFCpPxwPpvkKV5n+jU3uaSX98EZpaTlK51qqfwX29LxmMKs3pezfixQ67KCoq1jcDNXUiZpX9svKFD2Drlx+6s9pBkQGZcsmVNiCKQBJmrpFCgYhAPOEIjAGPkic0qp+pAaJtQpB/gYfF/cNCJmCm80s5s/jRuSOs=#012np0005471148.localdomain,192.168.122.105,np0005471148* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAp7Wif6DpMQKTwU3PubEUDmFwUOeZnS+fubLkMUqCdL#012np0005471148.localdomain,192.168.122.105,np0005471148* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+Jh3lVRxMbXFkgqshiJoCO9Ej1k6b9l13ZcaXQzdlR/Wufer1byxTOnOxRYkvLgFnjgmViKWAnlhwFgjslN0E=#012np0005471152.localdomain,192.168.122.108,np0005471152* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQL9bjzo5YAISp2Bxwtb4g1hALXPqelm3WBGwGfh3/tRyDvnxqpgAH4BkgnyM92vRVDUZgylBjfJ54aevQzR0sxDWI5un2tTEepezxrrMvJNDvOss/fCLi88oah/o3qw++j3XWh7zZNBR2ZlXoM/pIxbee1SynEGOX2B0csXrd1qrshg6L4eHx3xP0RwAulzm5seEcMLqx8KH2dq77wY0VqQkpaFyFb7FqX77rxq/UKPpgE0srhO8SRvE9De5pNe/qOciIyF6dgzu5EyyHu7KYjTILbMKxDa32WE/P2Rf7vIscc9uCS7JGMjSz6NeeFnpRpsv8N/pMUGyuUGsD1ZchAk2FVF+E5cZtF04URyBXHR3aMjxItV46eMTahkYu0ieB5XIe1ht+1mpTNW5HuK+c5IGVa1+5Y3udf7NKVNLxbJKJpiyb1+mVhhrwPzJFaIuMT3y2IHiF3xGDIof8BMBzvhUW/T0WYISPRdb3hpP5yODYfEz7Mmnpe6mZj+mFVVc=#012np0005471152.localdomain,192.168.122.108,np0005471152* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP3x5SckWWWGd79jap3Mvs5wH/QoloMzzJMibApRFTOH#012np0005471152.localdomain,192.168.122.108,np0005471152* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2IZNaNg1HrZ8uBp5hH2F2fftZpwxpN/FAZW1FDmDJnG3zQL7JXSnOySV+EzgCTEq8YFKz+6pYQVjbNBVcMyHY=#012np0005471147.localdomain,192.168.122.104,np0005471147* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCz7dSoZhAVsu7Q6pQ5T0a3vdxjM8VsWq083YCwmW5ZBuWxtpO+ywiBUZXF2GXQh83uhFPjTL6AVFeIX5lNLPi70M1qL6Twe/O2mk2gSzlx225JQnN98IGNIaiWFoDWJeh+QC5ahKjsZLMqt7JQaJMEu8Y+pNNhDzn+mrA5SQL/4KeoVuUMVnHW606U26xi/2P8WkxBdjPuLtDQdFdmprrS1/lNbxCAMj0MhrqsxbpX9uLe04KqrNXmsaTlvu+XKlf2y7mxaihY81Qbyf86Guw2DS8EIhDZjC2olPxoqJJn5ZAGtvtc/FzkH/pbbMy1CbD6OnTFGsUHbZKS9eBF7PtpLp3YiUp/FyRfiyxmtelUycYx7bqdixnmEGj4O2Ju2ehdpxO1RyBRyrfUelVA8bfBft6yd41RwKwujj5OtnOXzqb7I8O83ZgbDm6oUjTG+59hElsoR3PI5ow3C3NTrDQxwesLfuTjCrjHCWnvKIQb51xqtNRDT8PTStx27/FxOJ0=#012np0005471147.localdomain,192.168.122.104,np0005471147* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK2i2wPoxrCiKAfRIrzXmTAp8OTrj2YwZHMGqK46Nz23#012np0005471147.localdomain,192.168.122.104,np0005471147* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEylXfK9QkjmsDlz9cP3sZHSxfYmmFZ1i6DugCmJUagRpornJXqftjM+iDp79cZs676yn/qZCEtj0wsqsiaQjLA=#012np0005471151.localdomain,192.168.122.107,np0005471151* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeDNxXs+ZUIP9/a2zVFllXGXsP2/RtUXLMLDP4YL71gvVrRf+MpnYrvCNPSMtaio8hFnrpiDFXxbT/vT8cGaq0VtYxjMm6ggMMEpJTsx2xG5zkDW3nbKnfBWdlrf2h3+WUBHOB9mofrB5CT0cuNDshy8Zq3cPyqMZVPdJXPIH+fsWD+b65aHwAk93ThJehxt/nPEDADcRKHLYFTlAyvnZ5aEvqj714SQIjwLcSkgaTfu3JmjF9FllzZz3DKBld7fRbggrz2rkww5yxrvj9W/KsoSugYq1N+fEEWdUonP/PYnRfJ9Qe+OMV5TmEEYuUOqPqaVs8vMZI4zYb3l5asdknHsN0N3URQbZANs9Fettfh3uoOPlyegvPjIMukQ8KZAy+KQWSAzho7RnR5ULuWVNi7Rj9mFC01wy0778Zqb7BlWc+Yn3kNXEkR9u1vQjBq7B+Ie922b6pYARzXmaE2yjzI7QdYo1IB/o9UIP/zEfugki+28qB0215MGXrk3EqTk8=#012np0005471151.localdomain,192.168.122.107,np0005471151* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOPYp4E4CPb8OeaXcuCXzvWlnLbzMphE36OLWOqzbsk9#012np0005471151.localdomain,192.168.122.107,np0005471151* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVed2EDqr9esw80ElZbLpRPK5ioAVBRkpsLKO1S/aN8MVh1BSM2slQbIv+QbUY3Qu3prAQuxkBFoKvxbciSRgQ=#012np0005471146.localdomain,192.168.122.103,np0005471146* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDB7OvQtGFS2ddbuT67PLzOZMMKExXKgLGlJbGmtwnZie42R//csfGTuDcY5sTL5gAKr5LgWtvuSJPxC5H8l1UXw+Jr1ot425wmg47AIcheuJNQqzQ7tPAGH3PICnVC6aPHAOVRVF+gH7UOtvdgmSE7iMATMRPcUy2tqR8NCuKKvzDeS/2RQXJpgWok3C9RwXiVS5oUv9jUyevFtgntUOYojmdQgQKC7AwBkYfT7TF3CJZYryU/VVFtwd7a/UiSCw5QLoTN8NxCyROZfFtmylvUybp8RdUroQiriJw1zcQyVLsXbwq0clpb5hc+/3tQLZv3a6JrVpp5DZq+MW98UkErXy11sX4Mk9e2seewM0xMkdGzMReNlZqtUWLIISbhxkBby9gn3WRKG32HdCCSD66ZhNAfOCfpaO3dNiCRUyzYoh4WRF7pu7nwBQ/eTQp8SGptdGGHUf0XF9tqRWjj2nrVrHHOnbj/9clk9VdTU6dbcxFoz3X5SWbovR40rDPz6e0=#012np0005471146.localdomain,192.168.122.103,np0005471146* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICmMQkOJE522ttIEI6FiMBU6NgTQz2to1syfYlA1Memo#012np0005471146.localdomain,192.168.122.103,np0005471146* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEeJqmNJdbqm27rXqmy1Bcaw9svoUWZ+mqG5yOvqgawLTVR507UPdDgYoX7XGWbb81SzubbZqbU2YQpLzpWeEs4=#012np0005471150.localdomain,192.168.122.106,np0005471150* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCT5ftkzxR2Qyrkv4Bog+udHavLt9s9Di0AWsGW2RuyQQiM22RbERlEwcEpl46d2UZEA/h4vz9TbE4fxIRY43XsuoO7kScaRsaDEk80scoEanpXJXpL99y+HtDr7IiFnp920RFZWAvClhPuG5f4GTZcAH8JwlQdHLoU08owfBRpfZmDNZcoyX0tprcWQCD7KMlzpxwZFqhjkJVPrnq3lxWA9cG87b9CDA6sHuH8h4RYjBBtCOkxgTVQgBjGVWWjO64RQXgkKPObBX3sBjTYorcuu5af6cl8pwRuWCIDiskwHVqEvsdx7nXa+8le2b250IQoHti8LislYbkhX/LUO0TmKGbvUuzaK3gsuRGLxf+qG4UdCa7CYecLosB0sg0pv7c95e80sFtLwEFyKvUkMfEdbFIxMr03gd1i6lSeafCtY9Xk0sjkbJpMGaj2hsNlv1S6X8taFEHFuQyDEZ3ZkQXwxYkb0pqUef9Fn6d2VvlP4u7GHH+iQZtgv7NZrxvZOos=#012np0005471150.localdomain,192.168.122.106,np0005471150* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgKPJEV6wknnlU6vzKKYTIianKfcvSA46+IMP/yOIqt#012np0005471150.localdomain,192.168.122.106,np0005471150* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIaPYSDU/QOQ7ZadGCJmFA1TBpNbjPtGfciDHN2J4omWnXscBiFsDT0ajtGp7PFBlY4x2ml2I4zPhENaESWoYNQ=#012 create=True mode=0644 path=/tmp/ansible.kv9clk5i state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False 
marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:48 localhost python3.9[146694]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.kv9clk5i' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:20:49 localhost python3.9[146788]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.kv9clk5i state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:20:50 localhost systemd[1]: session-46.scope: Deactivated successfully. Oct 5 05:20:50 localhost systemd[1]: session-46.scope: Consumed 4.312s CPU time. Oct 5 05:20:50 localhost systemd-logind[760]: Session 46 logged out. Waiting for processes to exit. Oct 5 05:20:50 localhost systemd-logind[760]: Removed session 46. 
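The sequence above distributes a system-wide known_hosts file in three steps: `blockinfile` writes a marker-delimited block of host keys into a tempfile, `cat tmp > /etc/ssh/ssh_known_hosts` installs it, and the tempfile is removed. A sketch of the block `blockinfile` produces under the defaults shown in the log (`marker=# {mark} ANSIBLE MANAGED BLOCK`, marker_begin=BEGIN, marker_end=END); the host-name patterns follow the log, but the key material here is a placeholder:

```python
def managed_block(lines, marker="# {mark} ANSIBLE MANAGED BLOCK",
                  begin="BEGIN", end="END"):
    """Wrap content lines in blockinfile-style begin/end markers."""
    return "\n".join([marker.format(mark=begin), *lines, marker.format(mark=end)])

# Placeholder entries; the real entries carry full base64 key material.
hosts = [
    "np0005471148.localdomain,192.168.122.105,np0005471148* ssh-ed25519 AAAA...",
    "np0005471152.localdomain,192.168.122.108,np0005471152* ecdsa-sha2-nistp256 AAAA...",
]
block = managed_block(hosts)
```

The markers let a later run of the same task replace just its own block while leaving any other content of the file untouched.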
Oct 5 05:20:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56739 DF PROTO=TCP SPT=53100 DPT=9100 SEQ=1541663399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75911FE0000000001030307) Oct 5 05:20:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4400 DF PROTO=TCP SPT=58340 DPT=9882 SEQ=2081770090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75915EC0000000001030307) Oct 5 05:20:56 localhost sshd[146803]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:20:56 localhost systemd-logind[760]: New session 47 of user zuul. Oct 5 05:20:56 localhost systemd[1]: Started Session 47 of User zuul. Oct 5 05:20:57 localhost python3.9[146896]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:20:58 localhost python3.9[146992]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 05:20:59 localhost python3.9[147086]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:21:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28703 DF PROTO=TCP SPT=60684 DPT=9102 SEQ=1139084662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75929260000000001030307) Oct 5 05:21:01 localhost python3.9[147179]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft 
_uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:21:02 localhost python3.9[147272]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:21:02 localhost python3.9[147366]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:21:03 localhost python3.9[147461]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:03 localhost systemd[1]: session-47.scope: Deactivated successfully. Oct 5 05:21:03 localhost systemd[1]: session-47.scope: Consumed 3.876s CPU time. Oct 5 05:21:03 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit. Oct 5 05:21:03 localhost systemd-logind[760]: Removed session 47. 
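The session-47 tasks above apply the EDPM ruleset in a fixed order: chain definitions load first (`nft -f /etc/nftables/edpm-chains.nft`); then, gated on a `stat` of the `edpm-rules.nft.changed` marker, flushes, rules, and update-jumps are concatenated and piped through `nft -f -`; finally the marker file is removed. A sketch of that ordering as data; the command strings are copied from the log, the conditional on the marker is inferred from the stat call, and the helper function itself is illustrative:

```python
def nft_apply_steps(rules_changed: bool):
    """Return, in order, the shell steps the log shows for applying
    the EDPM nftables configuration."""
    steps = ["nft -f /etc/nftables/edpm-chains.nft"]
    if rules_changed:
        # Flush existing chains and reload rules atomically via stdin.
        steps.append(
            "cat /etc/nftables/edpm-flushes.nft "
            "/etc/nftables/edpm-rules.nft "
            "/etc/nftables/edpm-update-jumps.nft | nft -f -"
        )
        # Consume the change marker so unchanged runs skip the reload.
        steps.append("rm -f /etc/nftables/edpm-rules.nft.changed")
    return steps
```

Earlier in the log the same concatenation is run through `nft -c -f -` (check-only) before anything is written, so a syntax error is caught before the live ruleset is touched.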
Oct 5 05:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10061 DF PROTO=TCP SPT=36356 DPT=9101 SEQ=1484760253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75935330000000001030307) Oct 5 05:21:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10062 DF PROTO=TCP SPT=36356 DPT=9101 SEQ=1484760253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75939370000000001030307) Oct 5 05:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10063 DF PROTO=TCP SPT=36356 DPT=9101 SEQ=1484760253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75941360000000001030307) Oct 5 05:21:09 localhost sshd[147476]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:21:09 localhost systemd-logind[760]: New session 48 of user zuul. Oct 5 05:21:09 localhost systemd[1]: Started Session 48 of User zuul. 
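The recurring `kernel: DROPPING:` entries are netfilter log messages for SYN packets from 192.168.122.10 toward scrape-style ports (9100, 9101, 9102, 9105, 9882) on br-ex that no accept rule matches yet. Their key=value payload can be parsed generically; a sketch, with field names taken from the log lines themselves and the sample abbreviated from one of the entries above:

```python
def parse_drop_line(line: str) -> dict:
    """Parse the key=value fields of a kernel packet-log line such as
    the DROPPING entries above. Flag-only tokens (SYN, DF) are skipped."""
    fields = {}
    for tok in line.split():
        if "=" in tok:
            key, _, value = tok.partition("=")
            fields[key] = value
    return fields

sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb "
          "SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TTL=62 "
          "PROTO=TCP SPT=36356 DPT=9101 SYN")
info = parse_drop_line(sample)
```

Tallying `DPT` across the entries is a quick way to confirm which monitoring ports the firewall is rejecting while the EDPM rules are still being rolled out.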
Oct 5 05:21:10 localhost python3.9[147569]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:21:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10064 DF PROTO=TCP SPT=36356 DPT=9101 SEQ=1484760253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75950F70000000001030307) Oct 5 05:21:12 localhost python3.9[147665]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:21:13 localhost python3.9[147719]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Oct 5 05:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54980 DF PROTO=TCP SPT=51018 DPT=9105 SEQ=29710660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75966DC0000000001030307) Oct 5 05:21:17 localhost python3.9[147873]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:21:17 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54981 DF PROTO=TCP SPT=51018 DPT=9105 SEQ=29710660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7596AF60000000001030307) Oct 5 05:21:19 localhost python3.9[147981]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:19 localhost python3.9[148073]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54982 DF PROTO=TCP SPT=51018 DPT=9105 SEQ=29710660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75972F70000000001030307) Oct 5 05:21:20 localhost python3.9[148165]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 
path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:21 localhost python3.9[148255]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:21:22 localhost python3.9[148345]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:21:22 localhost python3.9[148437]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:21:23 localhost systemd[1]: session-48.scope: Deactivated successfully. Oct 5 05:21:23 localhost systemd[1]: session-48.scope: Consumed 8.765s CPU time. Oct 5 05:21:23 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit. Oct 5 05:21:23 localhost systemd-logind[760]: Removed session 48. 
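The session-48 tasks above implement a reboot check: `needs-restarting -r` (installed via yum-utils) is run, its output is recorded with `lineinfile` into `/var/lib/openstack/reboot_required/needs_restarting` (here reporting that systemd was updated since boot), and a later `find` over that directory tells the deployment whether a reboot is pending. `needs-restarting -r` exits 0 when no reboot is needed and 1 when core libraries or services were updated. A minimal sketch of that decision; the marker path is from the log, the helper name is made up:

```python
def reboot_required(needs_restarting_rc: int) -> bool:
    """Map the exit status of `needs-restarting -r` to a reboot
    decision: 0 = system up to date, 1 = reboot required."""
    if needs_restarting_rc not in (0, 1):
        raise RuntimeError("needs-restarting failed unexpectedly")
    return needs_restarting_rc == 1

# Marker file the playbook touches (mode 0600) and fills with the
# command output when a reboot is pending.
marker = "/var/lib/openstack/reboot_required/needs_restarting"
```

Recording the full command output, rather than just a flag, preserves the reason (which packages triggered the requirement) for later inspection.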
Oct 5 05:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54983 DF PROTO=TCP SPT=51018 DPT=9105 SEQ=29710660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75982B60000000001030307) Oct 5 05:21:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35577 DF PROTO=TCP SPT=60068 DPT=9100 SEQ=883918656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759872D0000000001030307) Oct 5 05:21:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50471 DF PROTO=TCP SPT=57756 DPT=9882 SEQ=817658973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7598B1C0000000001030307) Oct 5 05:21:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35578 DF PROTO=TCP SPT=60068 DPT=9100 SEQ=883918656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7598B360000000001030307) Oct 5 05:21:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50472 DF PROTO=TCP SPT=57756 DPT=9882 SEQ=817658973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7598F360000000001030307) Oct 5 05:21:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35579 DF PROTO=TCP SPT=60068 DPT=9100 SEQ=883918656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC75993360000000001030307) Oct 5 05:21:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50473 DF PROTO=TCP SPT=57756 DPT=9882 SEQ=817658973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75997360000000001030307) Oct 5 05:21:30 localhost sshd[148452]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:21:30 localhost systemd-logind[760]: New session 49 of user zuul. Oct 5 05:21:30 localhost systemd[1]: Started Session 49 of User zuul. Oct 5 05:21:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26780 DF PROTO=TCP SPT=56766 DPT=9102 SEQ=17860369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7599E550000000001030307) Oct 5 05:21:31 localhost python3.9[148545]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:21:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26781 DF PROTO=TCP SPT=56766 DPT=9102 SEQ=17860369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759A2760000000001030307) Oct 5 05:21:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35580 DF PROTO=TCP SPT=60068 DPT=9100 SEQ=883918656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759A2F60000000001030307) Oct 5 05:21:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50474 DF 
PROTO=TCP SPT=57756 DPT=9882 SEQ=817658973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759A6F60000000001030307) Oct 5 05:21:33 localhost python3.9[148641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44402 DF PROTO=TCP SPT=53236 DPT=9101 SEQ=1834350730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759AA630000000001030307) Oct 5 05:21:34 localhost python3.9[148733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:35 localhost python3.9[148806]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656093.9406972-186-124542545347177/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:35 localhost python3.9[148898]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:36 localhost python3.9[148990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44404 DF PROTO=TCP SPT=53236 DPT=9101 SEQ=1834350730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759B6760000000001030307) Oct 5 05:21:37 localhost python3.9[149063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656096.083534-259-38582405874773/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:38 localhost python3.9[149155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:39 localhost python3.9[149247]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:40 localhost python3.9[149320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656098.5224805-327-37616299367287/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:40 localhost python3.9[149412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44405 DF PROTO=TCP SPT=53236 DPT=9101 SEQ=1834350730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759C6360000000001030307) Oct 5 05:21:41 localhost python3.9[149504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:41 localhost python3.9[149577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656100.8683448-403-98252360739808/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:42 localhost python3.9[149669]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:43 localhost python3.9[149761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:43 localhost python3.9[149834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656102.838803-479-123510316021483/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:44 localhost python3.9[149926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:45 localhost python3.9[150018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:45 localhost python3.9[150091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656104.6445043-545-62954384011190/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:46 localhost python3.9[150183]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56532 DF PROTO=TCP SPT=60030 DPT=9105 SEQ=319682591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759DC0A0000000001030307) Oct 5 05:21:47 localhost python3.9[150275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True Oct 5 05:21:47 localhost python3.9[150348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656106.6171277-618-39125348049716/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56533 DF PROTO=TCP SPT=60030 DPT=9105 SEQ=319682591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759DFF60000000001030307) Oct 5 05:21:48 localhost python3.9[150440]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:21:48 localhost python3.9[150532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:49 localhost python3.9[150605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656108.4370534-687-229098564544235/.source.pem _original_basename=tls-ca-bundle.pem 
follow=False checksum=19da67ae0728e4923b9ed6e1c3d1cab74d06d73f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56534 DF PROTO=TCP SPT=60030 DPT=9105 SEQ=319682591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759E7F70000000001030307) Oct 5 05:21:49 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit. Oct 5 05:21:49 localhost systemd[1]: session-49.scope: Deactivated successfully. Oct 5 05:21:49 localhost systemd[1]: session-49.scope: Consumed 11.822s CPU time. Oct 5 05:21:49 localhost systemd-logind[760]: Removed session 49. Oct 5 05:21:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56535 DF PROTO=TCP SPT=60030 DPT=9105 SEQ=319682591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC759F7B60000000001030307) Oct 5 05:21:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26997 DF PROTO=TCP SPT=57714 DPT=9882 SEQ=181514539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A004C0000000001030307) Oct 5 05:21:56 localhost sshd[150620]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:21:56 localhost systemd-logind[760]: New session 50 of user zuul. Oct 5 05:21:56 localhost systemd[1]: Started Session 50 of User zuul. 
Oct 5 05:21:57 localhost python3.9[150715]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:21:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58572 DF PROTO=TCP SPT=46336 DPT=9100 SEQ=3028669471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A08760000000001030307) Oct 5 05:21:58 localhost python3.9[150807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:21:59 localhost python3.9[150880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656118.041964-64-208890310141856/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d68e0db228a7d8458c08a66635a19e112f8e9d34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:00 localhost python3.9[150972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:00 localhost python3.9[151045]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656119.5427217-64-233476115019908/.source.conf _original_basename=ceph.conf follow=False checksum=9ed326307220aa83db0d8ce552ee8014f398d5df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:00 localhost systemd[1]: session-50.scope: Deactivated successfully. Oct 5 05:22:00 localhost systemd[1]: session-50.scope: Consumed 2.322s CPU time. Oct 5 05:22:00 localhost systemd-logind[760]: Session 50 logged out. Waiting for processes to exit. Oct 5 05:22:00 localhost systemd-logind[760]: Removed session 50. Oct 5 05:22:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23859 DF PROTO=TCP SPT=51486 DPT=9102 SEQ=1683841294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A17770000000001030307) Oct 5 05:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59944 DF PROTO=TCP SPT=42422 DPT=9101 SEQ=457645589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A1F930000000001030307) Oct 5 05:22:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59946 DF PROTO=TCP SPT=42422 DPT=9101 SEQ=457645589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A2BB70000000001030307) Oct 5 05:22:07 localhost sshd[151060]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:22:07 localhost systemd-logind[760]: New session 51 of user zuul. 
Oct 5 05:22:07 localhost systemd[1]: Started Session 51 of User zuul. Oct 5 05:22:08 localhost python3.9[151153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:22:09 localhost python3.9[151249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:10 localhost python3.9[151341]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59947 DF PROTO=TCP SPT=42422 DPT=9101 SEQ=457645589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A3B760000000001030307) Oct 5 05:22:11 localhost python3.9[151431]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:22:12 localhost python3.9[151523]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Oct 5 05:22:13 localhost python3.9[151615]: ansible-ansible.legacy.setup Invoked with 
filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:22:14 localhost python3.9[151669]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:22:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13098 DF PROTO=TCP SPT=41788 DPT=9105 SEQ=2330358222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A513A0000000001030307) Oct 5 05:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13099 DF PROTO=TCP SPT=41788 DPT=9105 SEQ=2330358222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A55360000000001030307) Oct 5 05:22:19 localhost python3.9[151824]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:22:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13100 DF PROTO=TCP SPT=41788 DPT=9105 SEQ=2330358222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A5D360000000001030307) Oct 5 05:22:20 localhost 
python3[151934]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present Oct 5 05:22:21 localhost python3.9[152026]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:21 localhost python3.9[152118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:22 localhost python3.9[152166]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:23 localhost python3.9[152258]: 
ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:23 localhost python3.9[152306]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9lk8kcrp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13101 DF PROTO=TCP SPT=41788 DPT=9105 SEQ=2330358222 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A6CF60000000001030307) Oct 5 05:22:24 localhost python3.9[152398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:24 localhost python3.9[152446]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:25 localhost python3.9[152538]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True 
strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32654 DF PROTO=TCP SPT=39812 DPT=9100 SEQ=3867298747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A75760000000001030307) Oct 5 05:22:26 localhost python3[152631]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:22:27 localhost python3.9[152723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:27 localhost python3.9[152798]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656146.6212137-434-223536751699296/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32655 DF PROTO=TCP SPT=39812 DPT=9100 SEQ=3867298747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A7D760000000001030307) Oct 5 05:22:28 localhost python3.9[152890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:28 localhost python3.9[152965]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656147.930411-479-206734760179489/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:30 localhost python3.9[153057]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:31 localhost python3.9[153132]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656149.7250588-524-157666364580739/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57018 DF PROTO=TCP SPT=53044 DPT=9102 SEQ=2406475316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A8CB60000000001030307) Oct 5 05:22:32 localhost python3.9[153224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:32 localhost python3.9[153299]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656151.5994644-568-191589154755676/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:33 localhost python3.9[153391]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:33 localhost python3.9[153466]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656152.7423477-613-143872279888845/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4027 DF PROTO=TCP SPT=33522 DPT=9101 SEQ=3252343161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75A94C30000000001030307) Oct 5 05:22:34 localhost python3.9[153558]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:35 localhost python3.9[153650]: 
ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:35 localhost python3.9[153745]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:36 localhost python3.9[153837]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4029 DF PROTO=TCP SPT=33522 DPT=9101 SEQ=3252343161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AA0B60000000001030307) Oct 5 05:22:37 localhost python3.9[153930]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:22:38 localhost python3.9[154024]: ansible-ansible.legacy.command Invoked with 
_raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:38 localhost python3.9[154119]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:39 localhost python3.9[154209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:22:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4030 DF PROTO=TCP SPT=33522 DPT=9101 SEQ=3252343161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AB0760000000001030307) Oct 5 05:22:41 localhost python3.9[154302]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . 
external_ids:hostname=np0005471152.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:85:5b:92:b0" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:41 localhost ovs-vsctl[154303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005471152.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:85:5b:92:b0 external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Oct 5 05:22:42 localhost python3.9[154395]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:22:43 localhost python3.9[154488]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:22:44 localhost 
python3.9[154582]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:44 localhost python3.9[154674]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:45 localhost python3.9[154722]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:45 localhost python3.9[154814]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:46 localhost python3.9[154862]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 
5 05:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59537 DF PROTO=TCP SPT=41922 DPT=9105 SEQ=563485989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AC66B0000000001030307) Oct 5 05:22:47 localhost python3.9[154954]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:47 localhost python3.9[155046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59538 DF PROTO=TCP SPT=41922 DPT=9105 SEQ=563485989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75ACA770000000001030307) Oct 5 05:22:48 localhost python3.9[155094]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:48 localhost python3.9[155186]: ansible-ansible.legacy.stat Invoked with 
path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:49 localhost python3.9[155234]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59539 DF PROTO=TCP SPT=41922 DPT=9105 SEQ=563485989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AD2770000000001030307) Oct 5 05:22:50 localhost python3.9[155326]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:22:50 localhost systemd[1]: Reloading. Oct 5 05:22:50 localhost systemd-sysv-generator[155354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:22:50 localhost systemd-rc-local-generator[155350]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:22:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:22:51 localhost python3.9[155456]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:52 localhost python3.9[155504]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:53 localhost python3.9[155596]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:53 localhost python3.9[155644]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59540 DF PROTO=TCP SPT=41922 DPT=9105 SEQ=563485989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AE2360000000001030307) Oct 5 05:22:54 localhost python3.9[155736]: ansible-ansible.builtin.systemd Invoked 
with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:22:54 localhost systemd[1]: Reloading. Oct 5 05:22:54 localhost systemd-rc-local-generator[155761]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:22:54 localhost systemd-sysv-generator[155764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:22:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:22:54 localhost systemd[1]: Starting Create netns directory... Oct 5 05:22:54 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:22:54 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:22:54 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:22:55 localhost python3.9[155872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36295 DF PROTO=TCP SPT=59862 DPT=9882 SEQ=3716353108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AEAAC0000000001030307) Oct 5 05:22:56 localhost python3.9[155964]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:56 localhost python3.9[156037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656175.9147568-1346-53642041471002/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:57 localhost python3.9[156129]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:22:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3306 DF PROTO=TCP SPT=39452 DPT=9100 SEQ=2624079242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75AF2B70000000001030307) Oct 5 05:22:58 localhost python3.9[156221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:22:59 localhost python3.9[156296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656178.082194-1420-15385987542710/.source.json _original_basename=.1d8q64to follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:22:59 localhost python3.9[156388]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31167 DF PROTO=TCP SPT=56202 DPT=9102 SEQ=3263562729 ACK=0 WINDOW=32640 RES=0x00 
SYN URGP=0 OPT (020405500402080AC75B01F70000000001030307) Oct 5 05:23:02 localhost python3.9[156645]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Oct 5 05:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23790 DF PROTO=TCP SPT=34130 DPT=9101 SEQ=2258619383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B09F30000000001030307) Oct 5 05:23:04 localhost python3.9[156737]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:23:04 localhost python3.9[156829]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23792 DF PROTO=TCP SPT=34130 DPT=9101 SEQ=2258619383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B15F70000000001030307) Oct 5 05:23:08 localhost python3[156948]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:23:09 localhost python3[156948]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "55a900d9f0d3284e9f7b4ec31d42a516ca3b16bc0ce186b6223860f9b9ee7269",#012 "Digest": "sha256:32b3cf3043ae552a67b716cf04bf0bdb981e8077ccb2893336edcc36bfd3946d",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:32b3cf3043ae552a67b716cf04bf0bdb981e8077ccb2893336edcc36bfd3946d"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:40:17.17546349Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345642952,#012 "VirtualSize": 345642952,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/60afe3546a98a201263be776cccb4442ad15a631184295cbccd8c923b430a1f8/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 
"sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:5ff34b53abd092090c68bcc95bc461f0d3ee7243562df6154491ba8d09607eec",#012 "sha256:0b25eff48e4a51bccec814322a9b10589b6ba63d76de0828aaf9fdfd4dfb16c0"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": 
true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l Oct 5 05:23:09 localhost podman[157000]: 2025-10-05 09:23:09.264405287 +0000 UTC m=+0.093385797 container remove 2b1952e250ab9feaeb4c8d51c8b0c34db62ea4cc6d19ec20a40ee349a658fd5c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, 
com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team) Oct 5 05:23:09 localhost python3[156948]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Oct 5 05:23:09 localhost podman[157013]: Oct 5 05:23:09 localhost podman[157013]: 2025-10-05 09:23:09.368696892 +0000 UTC m=+0.085619654 container create 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 05:23:09 localhost podman[157013]: 2025-10-05 09:23:09.327036541 +0000 UTC m=+0.043959373 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 5 05:23:09 localhost python3[156948]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Oct 5 05:23:10 localhost python3.9[157141]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:23:11 localhost python3.9[157235]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=23793 DF PROTO=TCP SPT=34130 DPT=9101 SEQ=2258619383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B25B60000000001030307) Oct 5 05:23:11 localhost python3.9[157281]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:23:12 localhost python3.9[157372]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656192.0476882-1684-121191796542259/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:23:13 localhost python3.9[157418]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:23:13 localhost systemd[1]: Reloading. Oct 5 05:23:13 localhost systemd-rc-local-generator[157443]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:23:13 localhost systemd-sysv-generator[157446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:23:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:23:14 localhost python3.9[157499]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:23:14 localhost systemd[1]: Reloading. Oct 5 05:23:14 localhost systemd-sysv-generator[157526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:23:14 localhost systemd-rc-local-generator[157522]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:23:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:23:14 localhost systemd[1]: Starting ovn_controller container... Oct 5 05:23:14 localhost systemd[1]: Started libcrun container. Oct 5 05:23:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1fc05b8b25cf6b53931f79c4d93c9e1ba65dc2aa399b8951a1baf45a8d3321/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 5 05:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:23:14 localhost podman[157541]: 2025-10-05 09:23:14.901652449 +0000 UTC m=+0.153057020 container init 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:23:14 localhost systemd[1]: tmp-crun.R6bKIa.mount: Deactivated successfully. Oct 5 05:23:14 localhost ovn_controller[157556]: + sudo -E kolla_set_configs Oct 5 05:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:23:14 localhost podman[157541]: 2025-10-05 09:23:14.939401343 +0000 UTC m=+0.190805834 container start 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:23:14 localhost edpm-start-podman-container[157541]: ovn_controller Oct 5 05:23:14 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 05:23:14 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Oct 5 05:23:14 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 05:23:15 localhost systemd[1]: Starting User Manager for UID 0... 
Oct 5 05:23:15 localhost podman[157564]: 2025-10-05 09:23:15.040242713 +0000 UTC m=+0.092671057 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 05:23:15 localhost podman[157564]: 2025-10-05 09:23:15.058126142 +0000 UTC m=+0.110554526 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 5 05:23:15 localhost podman[157564]: unhealthy
Oct 5 05:23:15 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:23:15 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Failed with result 'exit-code'.
Oct 5 05:23:15 localhost edpm-start-podman-container[157540]: Creating additional drop-in dependency for "ovn_controller" (70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c)
Oct 5 05:23:15 localhost systemd[1]: Reloading.
Oct 5 05:23:15 localhost systemd[157587]: Queued start job for default target Main User Target.
Oct 5 05:23:15 localhost systemd[157587]: Created slice User Application Slice.
Oct 5 05:23:15 localhost systemd[157587]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 5 05:23:15 localhost systemd[157587]: Started Daily Cleanup of User's Temporary Directories.
Oct 5 05:23:15 localhost systemd[157587]: Reached target Paths.
Oct 5 05:23:15 localhost systemd[157587]: Reached target Timers.
Oct 5 05:23:15 localhost systemd[157587]: Starting D-Bus User Message Bus Socket...
Oct 5 05:23:15 localhost systemd[157587]: Starting Create User's Volatile Files and Directories...
Oct 5 05:23:15 localhost systemd[157587]: Listening on D-Bus User Message Bus Socket.
Oct 5 05:23:15 localhost systemd[157587]: Reached target Sockets.
Oct 5 05:23:15 localhost systemd[157587]: Finished Create User's Volatile Files and Directories.
Oct 5 05:23:15 localhost systemd[157587]: Reached target Basic System.
Oct 5 05:23:15 localhost systemd[157587]: Reached target Main User Target.
Oct 5 05:23:15 localhost systemd[157587]: Startup finished in 120ms.
Oct 5 05:23:15 localhost systemd-sysv-generator[157647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:23:15 localhost systemd-rc-local-generator[157640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:23:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:23:15 localhost systemd[1]: Started User Manager for UID 0.
Oct 5 05:23:15 localhost systemd[1]: Started ovn_controller container.
Oct 5 05:23:15 localhost systemd[1]: Started Session c11 of User root.
Oct 5 05:23:15 localhost ovn_controller[157556]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 5 05:23:15 localhost ovn_controller[157556]: INFO:__main__:Validating config file
Oct 5 05:23:15 localhost ovn_controller[157556]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 5 05:23:15 localhost ovn_controller[157556]: INFO:__main__:Writing out command to execute
Oct 5 05:23:15 localhost systemd[1]: session-c11.scope: Deactivated successfully.
Oct 5 05:23:15 localhost ovn_controller[157556]: ++ cat /run_command
Oct 5 05:23:15 localhost ovn_controller[157556]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Oct 5 05:23:15 localhost ovn_controller[157556]: + ARGS=
Oct 5 05:23:15 localhost ovn_controller[157556]: + sudo kolla_copy_cacerts
Oct 5 05:23:15 localhost systemd[1]: Started Session c12 of User root.
Oct 5 05:23:15 localhost systemd[1]: session-c12.scope: Deactivated successfully.
Oct 5 05:23:15 localhost ovn_controller[157556]: + [[ ! -n '' ]]
Oct 5 05:23:15 localhost ovn_controller[157556]: + . kolla_extend_start
Oct 5 05:23:15 localhost ovn_controller[157556]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Oct 5 05:23:15 localhost ovn_controller[157556]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Oct 5 05:23:15 localhost ovn_controller[157556]: + umask 0022
Oct 5 05:23:15 localhost ovn_controller[157556]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00013|main|INFO|OVS feature set changed, force recompute.
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00021|main|INFO|OVS feature set changed, force recompute.
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 5 05:23:15 localhost ovn_controller[157556]: 2025-10-05T09:23:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 5 05:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37168 DF PROTO=TCP SPT=46184 DPT=9105 SEQ=2417004891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B3B9A0000000001030307)
Oct 5 05:23:17 localhost python3.9[157756]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:23:17 localhost ovs-vsctl[157757]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 5 05:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37169 DF PROTO=TCP SPT=46184 DPT=9105 SEQ=2417004891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B3FB60000000001030307)
Oct 5 05:23:17 localhost python3.9[157849]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:23:17 localhost ovs-vsctl[157851]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 5 05:23:18 localhost python3.9[157944]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:23:18 localhost ovs-vsctl[157945]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 5 05:23:19 localhost systemd[1]: session-51.scope: Deactivated successfully.
Oct 5 05:23:19 localhost systemd[1]: session-51.scope: Consumed 40.929s CPU time.
Oct 5 05:23:19 localhost systemd-logind[760]: Session 51 logged out. Waiting for processes to exit.
Oct 5 05:23:19 localhost systemd-logind[760]: Removed session 51.
Oct 5 05:23:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37170 DF PROTO=TCP SPT=46184 DPT=9105 SEQ=2417004891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B47B60000000001030307)
Oct 5 05:23:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37171 DF PROTO=TCP SPT=46184 DPT=9105 SEQ=2417004891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B57760000000001030307)
Oct 5 05:23:25 localhost sshd[158037]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:23:25 localhost systemd-logind[760]: New session 53 of user zuul.
Oct 5 05:23:25 localhost systemd[1]: Started Session 53 of User zuul.
Oct 5 05:23:25 localhost systemd[1]: Stopping User Manager for UID 0...
Oct 5 05:23:25 localhost systemd[157587]: Activating special unit Exit the Session...
Oct 5 05:23:25 localhost systemd[157587]: Stopped target Main User Target.
Oct 5 05:23:25 localhost systemd[157587]: Stopped target Basic System.
Oct 5 05:23:25 localhost systemd[157587]: Stopped target Paths.
Oct 5 05:23:25 localhost systemd[157587]: Stopped target Sockets.
Oct 5 05:23:25 localhost systemd[157587]: Stopped target Timers.
Oct 5 05:23:25 localhost systemd[157587]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 5 05:23:25 localhost systemd[157587]: Closed D-Bus User Message Bus Socket.
Oct 5 05:23:25 localhost systemd[157587]: Stopped Create User's Volatile Files and Directories.
Oct 5 05:23:25 localhost systemd[157587]: Removed slice User Application Slice.
Oct 5 05:23:25 localhost systemd[157587]: Reached target Shutdown.
Oct 5 05:23:25 localhost systemd[157587]: Finished Exit the Session.
Oct 5 05:23:25 localhost systemd[157587]: Reached target Exit the Session.
Oct 5 05:23:25 localhost systemd[1]: user@0.service: Deactivated successfully.
Oct 5 05:23:25 localhost systemd[1]: Stopped User Manager for UID 0.
Oct 5 05:23:25 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 5 05:23:25 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 5 05:23:25 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 5 05:23:25 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 5 05:23:25 localhost systemd[1]: Removed slice User Slice of UID 0.
Oct 5 05:23:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65394 DF PROTO=TCP SPT=47426 DPT=9882 SEQ=3222140107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B5FDC0000000001030307)
Oct 5 05:23:26 localhost python3.9[158133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:23:27 localhost python3.9[158229]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18171 DF PROTO=TCP SPT=43364 DPT=9100 SEQ=393110613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B67F60000000001030307)
Oct 5 05:23:28 localhost python3.9[158321]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:28 localhost python3.9[158413]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:29 localhost python3.9[158505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:30 localhost python3.9[158597]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:30 localhost python3.9[158687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:23:31 localhost python3.9[158779]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 5 05:23:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59258 DF PROTO=TCP SPT=36436 DPT=9102 SEQ=3176346761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B77360000000001030307)
Oct 5 05:23:32 localhost python3.9[158869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:33 localhost python3.9[158942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656211.8384366-221-209507113399318/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5842 DF PROTO=TCP SPT=42396 DPT=9101 SEQ=3722532617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B7F220000000001030307)
Oct 5 05:23:34 localhost python3.9[159033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:34 localhost python3.9[159106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656213.9488106-265-277908166643456/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:36 localhost python3.9[159198]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 5 05:23:37 localhost python3.9[159252]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 5 05:23:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5844 DF PROTO=TCP SPT=42396 DPT=9101 SEQ=3722532617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B8B360000000001030307)
Oct 5 05:23:41 localhost python3.9[159346]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 5 05:23:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5845 DF PROTO=TCP SPT=42396 DPT=9101 SEQ=3722532617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75B9AF70000000001030307)
Oct 5 05:23:42 localhost python3.9[159439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:43 localhost python3.9[159510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656222.3859034-376-172116709773716/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:43 localhost python3.9[159600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:44 localhost python3.9[159671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656223.435452-376-33682926192656/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:23:45 localhost podman[159731]: 2025-10-05 09:23:45.899956482 +0000 UTC m=+0.064809256 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:23:45 localhost ovn_controller[157556]: 2025-10-05T09:23:45Z|00023|memory|INFO|12688 kB peak resident set size after 30.3 seconds
Oct 5 05:23:45 localhost ovn_controller[157556]: 2025-10-05T09:23:45Z|00024|memory|INFO|idl-cells-OVN_Southbound:3978 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Oct 5 05:23:45 localhost podman[159731]: 2025-10-05 09:23:45.991691562 +0000 UTC m=+0.156544366 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:23:46 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:23:46 localhost python3.9[159782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:46 localhost python3.9[159856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656225.6944141-508-44619661365623/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17399 DF PROTO=TCP SPT=48984 DPT=9105 SEQ=2164617117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BB0CC0000000001030307)
Oct 5 05:23:47 localhost python3.9[159946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17400 DF PROTO=TCP SPT=48984 DPT=9105 SEQ=2164617117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BB4B60000000001030307)
Oct 5 05:23:48 localhost python3.9[160017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656226.7907338-508-262732321470275/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:48 localhost python3.9[160107]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:23:49 localhost python3.9[160201]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17401 DF PROTO=TCP SPT=48984 DPT=9105 SEQ=2164617117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BBCB70000000001030307)
Oct 5 05:23:50 localhost python3.9[160293]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:50 localhost python3.9[160341]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:51 localhost python3.9[160433]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:51 localhost python3.9[160481]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:23:52 localhost python3.9[160573]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:23:52 localhost python3.9[160665]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:53 localhost python3.9[160713]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:23:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17402 DF PROTO=TCP SPT=48984 DPT=9105 SEQ=2164617117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BCC760000000001030307)
Oct 5 05:23:54 localhost python3.9[160805]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:54 localhost python3.9[160853]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:23:55 localhost python3.9[160945]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:23:55 localhost systemd[1]: Reloading.
Oct 5 05:23:55 localhost systemd-rc-local-generator[160971]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:23:55 localhost systemd-sysv-generator[160976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:23:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:23:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53474 DF PROTO=TCP SPT=38690 DPT=9882 SEQ=1504346886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BD50C0000000001030307)
Oct 5 05:23:56 localhost python3.9[161076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:57 localhost python3.9[161124]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:23:58 localhost python3.9[161216]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:23:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19885 DF PROTO=TCP SPT=46114 DPT=9100 SEQ=3253131979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BDD370000000001030307)
Oct 5 05:23:59 localhost python3.9[161264]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:24:00 localhost python3.9[161356]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:24:00 localhost systemd[1]: Reloading.
Oct 5 05:24:00 localhost systemd-sysv-generator[161381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:24:00 localhost systemd-rc-local-generator[161376]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:24:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:24:00 localhost systemd[1]: Starting Create netns directory...
Oct 5 05:24:00 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 5 05:24:00 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 5 05:24:00 localhost systemd[1]: Finished Create netns directory.
Oct 5 05:24:01 localhost python3.9[161489]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:24:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43244 DF PROTO=TCP SPT=55544 DPT=9102 SEQ=3474858335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BEC360000000001030307)
Oct 5 05:24:02 localhost python3.9[161581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:24:02 localhost python3.9[161654]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656241.5413945-961-255174971747404/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:24:03 localhost python3.9[161746]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53209 DF PROTO=TCP SPT=56452 DPT=9101 SEQ=2397858298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75BF4530000000001030307)
Oct 5 05:24:04 localhost python3.9[161838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:24:04 localhost python3.9[161913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656243.6182785-1036-138592893658818/.source.json _original_basename=.9rk71zs1 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:24:05 localhost python3.9[162005]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53211 DF PROTO=TCP SPT=56452 DPT=9101 SEQ=2397858298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C00760000000001030307)
Oct 5 05:24:07 localhost python3.9[162262]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 5 05:24:08 localhost python3.9[162354]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 5 05:24:09 localhost python3.9[162446]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 5 05:24:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53212 DF PROTO=TCP SPT=56452 DPT=9101 SEQ=2397858298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C10360000000001030307)
Oct 5 05:24:13 localhost python3[162565]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 5 05:24:13 localhost python3[162565]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "484a8e9b317dc3c79222f8881637d84827689f07b39da081149288f7f4e4c6e5",#012 "Digest": "sha256:233c16d7dd07b08322829bae5a63ad7cffcf46ecf4af5469ace57d26ee006607",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:233c16d7dd07b08322829bae5a63ad7cffcf46ecf4af5469ace57d26ee006607"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:30:29.428510147Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784020738,#012 "VirtualSize": 784020738,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/5dec2b237273ccb78113c2b1c492ef164c4f5b231452e08517989bb84e3d4334/diff:/var/lib/containers/storage/overlay/742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:78752b72dcf3ae244a81cb8c65b7d5fdd7f58198588f5b7d6f1b871b40a43830",#012 "sha256:ae3018f56d99031ced3e0313d6ced246defa366d2edcaf6c9a695cd7ecd3992d",#012 "sha256:a6b2e01de070886feb7ef7949f5a4cea2598b7418a8c15d220d6eb5abb98b85b"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.con
Oct 5 05:24:13 localhost podman[162615]: 2025-10-05 09:24:13.855638575 +0000 UTC m=+0.076645631 container remove 1804a1a8714adca10cc8c5637125242fdf570bad48c9cdbfb938baa2a2788379 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '61cb19106b923f6601e2c325a34cdd49'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 5 05:24:13 localhost python3[162565]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Oct 5 05:24:13 localhost podman[162629]:
Oct 5 05:24:13 localhost podman[162629]: 2025-10-05 09:24:13.951389819 +0000 UTC m=+0.080189861 container create 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:24:13 localhost podman[162629]: 2025-10-05 09:24:13.906044514 +0000 UTC m=+0.034844626 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 5 05:24:13 localhost python3[162565]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 5 05:24:14 localhost python3.9[162756]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:24:15 localhost python3.9[162850]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:24:15 localhost python3.9[162896]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:24:16 localhost podman[162988]: 2025-10-05 09:24:16.574649101 +0000 UTC m=+0.073089528 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 5 05:24:16 localhost podman[162988]: 2025-10-05 09:24:16.61626039 +0000 UTC m=+0.114700807 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller)
Oct 5 05:24:16 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:24:16 localhost python3.9[162987]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656256.0583212-1300-138892321741656/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:24:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40447 DF PROTO=TCP SPT=46548 DPT=9105 SEQ=558724975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C25FB0000000001030307)
Oct 5 05:24:17 localhost python3.9[163058]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:24:17 localhost systemd[1]: Reloading.
Oct 5 05:24:17 localhost systemd-rc-local-generator[163085]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:24:17 localhost systemd-sysv-generator[163089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:24:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40448 DF PROTO=TCP SPT=46548 DPT=9105 SEQ=558724975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C29F60000000001030307)
Oct 5 05:24:18 localhost python3.9[163140]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:24:18 localhost systemd[1]: Reloading.
Oct 5 05:24:18 localhost systemd-sysv-generator[163168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:24:18 localhost systemd-rc-local-generator[163164]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:24:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:24:18 localhost systemd[1]: Starting ovn_metadata_agent container...
Oct 5 05:24:18 localhost systemd[1]: tmp-crun.0wobxC.mount: Deactivated successfully.
Oct 5 05:24:18 localhost systemd[1]: Started libcrun container.
Oct 5 05:24:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ffe00f703c916bb78a61c9a8282a77ea3e21e5ff1c8a09bd9afd79c4fd7530/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 5 05:24:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ffe00f703c916bb78a61c9a8282a77ea3e21e5ff1c8a09bd9afd79c4fd7530/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 05:24:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:24:18 localhost podman[163181]: 2025-10-05 09:24:18.738361439 +0000 UTC m=+0.143622464 container init 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + sudo -E kolla_set_configs
Oct 5 05:24:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:24:18 localhost podman[163181]: 2025-10-05 09:24:18.778365921 +0000 UTC m=+0.183626946 container start 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 05:24:18 localhost edpm-start-podman-container[163181]: ovn_metadata_agent
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Validating config file
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Copying service configuration files
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Writing out command to execute
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: ++ cat /run_command
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + CMD=neutron-ovn-metadata-agent
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + ARGS=
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + sudo kolla_copy_cacerts
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + [[ ! -n '' ]]
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + . kolla_extend_start
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: Running command: 'neutron-ovn-metadata-agent'
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + umask 0022
Oct 5 05:24:18 localhost ovn_metadata_agent[163196]: + exec neutron-ovn-metadata-agent
Oct 5 05:24:18 localhost podman[163204]: 2025-10-05 09:24:18.872707456 +0000 UTC m=+0.088948451 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent)
Oct 5 05:24:18 localhost podman[163204]: 2025-10-05 09:24:18.952564476 +0000 UTC m=+0.168805461 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 05:24:18 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:24:18 localhost edpm-start-podman-container[163180]: Creating additional drop-in dependency for "ovn_metadata_agent" (2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01)
Oct 5 05:24:18 localhost systemd[1]: Reloading.
Oct 5 05:24:19 localhost systemd-sysv-generator[163271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:24:19 localhost systemd-rc-local-generator[163268]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:24:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:24:19 localhost systemd[1]: Started ovn_metadata_agent container.
Oct 5 05:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40449 DF PROTO=TCP SPT=46548 DPT=9105 SEQ=558724975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C31F70000000001030307)
Oct 5 05:24:20 localhost systemd[1]: session-53.scope: Deactivated successfully.
Oct 5 05:24:20 localhost systemd[1]: session-53.scope: Consumed 31.760s CPU time.
Oct 5 05:24:20 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit.
Oct 5 05:24:20 localhost systemd-logind[760]: Removed session 53.
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.325 163201 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.325 163201 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.325 163201 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.326 163201 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.327 163201 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.328 163201 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.329 163201 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.330 163201 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.331 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.332 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.333 163201 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-]
rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.334 163201 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 
2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.335 163201 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.336 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 
10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.337 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] 
oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.338 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.339 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG 
neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.340 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG 
neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.341 163201 DEBUG neutron.agent.ovn.metadata_agent [-] 
privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.342 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.343 163201 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.344 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.345 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.346 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 
2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.347 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 
2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.348 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.349 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.350 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.351 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG 
neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] 
oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.352 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 
163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.353 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.354 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.355 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.356 163201 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.356 163201 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.397 163201 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.397 163201 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.398 163201 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.398 163201 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.398 163201 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Oct 5 05:24:20 localhost 
ovn_metadata_agent[163196]: 2025-10-05 09:24:20.412 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c2abb7f3-ae8d-4817-a99b-01536f41e92b (UUID: c2abb7f3-ae8d-4817-a99b-01536f41e92b) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.431 163201 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.431 163201 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.431 163201 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.431 163201 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.433 163201 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.435 163201 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.443 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c2abb7f3-ae8d-4817-a99b-01536f41e92b'),), old_conditions=None), priority=20 to 
row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': 'd4f299b0-f580-5e27-a5ab-cc540e21ffa9', 'neutron:ovn-metadata-sb-cfg': '1'}, name=c2abb7f3-ae8d-4817-a99b-01536f41e92b, nb_cfg_timestamp=1759656205268, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.444 163201 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.444 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.445 163201 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.445 163201 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.445 163201 INFO oslo_service.service [-] Starting 1 workers#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.447 163201 DEBUG oslo_service.service [-] Started child 163299 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.449 163201 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', 
'--privsep_sock_path', '/tmp/tmps8fv202x/privsep.sock']#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.451 163299 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169891'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.473 163299 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.474 163299 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.474 163299 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.476 163299 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.477 163299 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Oct 5 05:24:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.489 163299 INFO eventlet.wsgi.server [-] (163299) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.016 163201 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.017 163201 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to 
/tmp/tmps8fv202x/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.911 163334 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.916 163334 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.920 163334 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:20.920 163334 INFO oslo.privsep.daemon [-] privsep daemon running as pid 163334#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.020 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0ecb84-ac51-454c-b875-95265fdaca96]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.421 163334 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.421 163334 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.421 163334 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.854 
163334 DEBUG oslo.privsep.daemon [-] privsep: reply[08c92679-232c-46e1-8b2a-d8020810d240]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.857 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, column=external_ids, values=({'neutron:ovn-metadata-id': 'd4f299b0-f580-5e27-a5ab-cc540e21ffa9'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.858 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.859 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.870 163201 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.871 163201 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.871 163201 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.871 163201 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.871 163201 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.871 163201 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.872 163201 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.872 163201 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.872 163201 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.872 163201 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.873 163201 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.873 163201 DEBUG oslo_service.service [-] 
auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.873 163201 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.873 163201 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.874 163201 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.874 163201 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.874 163201 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.874 163201 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.874 163201 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.875 163201 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.875 163201 DEBUG oslo_service.service [-] 
config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.875 163201 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.875 163201 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.876 163201 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.876 163201 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.876 163201 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.876 163201 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.877 163201 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.877 163201 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.877 163201 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.877 163201 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.878 163201 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.878 163201 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.878 163201 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.878 163201 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.879 163201 DEBUG oslo_service.service [-] global_physnet_mtu = 
1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.879 163201 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.879 163201 DEBUG oslo_service.service [-] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.879 163201 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.880 163201 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.880 163201 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.880 163201 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.880 163201 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.881 163201 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.881 163201 DEBUG oslo_service.service [-] 
log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.881 163201 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.881 163201 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.881 163201 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.882 163201 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.882 163201 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.882 163201 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.882 163201 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.882 163201 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.883 163201 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.883 163201 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.883 163201 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.883 163201 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.883 163201 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.884 163201 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.884 163201 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.884 163201 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.884 163201 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.885 163201 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.885 163201 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.885 163201 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.885 163201 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.886 163201 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.886 163201 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.886 163201 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.886 163201 DEBUG oslo_service.service [-] 
notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.886 163201 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.887 163201 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.887 163201 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.887 163201 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.887 163201 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.887 163201 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.888 163201 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.888 163201 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 
09:24:21.888 163201 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.888 163201 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.889 163201 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.889 163201 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.889 163201 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.889 163201 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.889 163201 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.890 163201 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.890 163201 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 
09:24:21.890 163201 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.890 163201 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.891 163201 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.891 163201 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.891 163201 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.891 163201 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.891 163201 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.892 163201 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.892 163201 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.892 163201 DEBUG 
oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.892 163201 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.892 163201 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.893 163201 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.893 163201 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.893 163201 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.893 163201 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.893 163201 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.894 163201 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.894 163201 DEBUG oslo_service.service [-] wsgi_keep_alive = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.894 163201 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.894 163201 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.895 163201 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.895 163201 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.895 163201 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.895 163201 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.895 163201 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.896 163201 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.896 163201 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.896 163201 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.896 163201 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.897 163201 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.897 163201 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.897 163201 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.897 163201 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.898 163201 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.898 163201 DEBUG 
oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.898 163201 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.898 163201 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.899 163201 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.899 163201 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.899 163201 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.899 163201 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.899 163201 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.900 163201 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 
1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.900 163201 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.900 163201 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.900 163201 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.901 163201 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.901 163201 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.901 163201 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.901 163201 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.901 163201 DEBUG oslo_service.service [-] privsep.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.902 163201 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.902 163201 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.902 163201 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.902 163201 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.903 163201 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.903 163201 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.903 163201 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.903 163201 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.903 163201 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.904 163201 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.904 163201 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.904 163201 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.905 163201 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.905 163201 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.905 163201 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.905 163201 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.905 163201 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.906 163201 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.906 163201 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.906 163201 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.906 163201 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.906 163201 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.907 163201 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.907 163201 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.907 163201 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.907 163201 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.908 163201 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.908 163201 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.908 163201 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.908 163201 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.909 163201 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.909 163201 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.909 163201 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.909 163201 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.910 163201 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.910 163201 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.910 163201 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.910 163201 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.911 163201 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.911 163201 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.911 163201 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.911 163201 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.911 163201 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.912 163201 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.912 163201 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.912 163201 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.913 163201 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.913 163201 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.913 163201 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.913 163201 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.913 163201 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.914 163201 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.914 163201 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.914 163201 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.914 163201 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.915 163201 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.915 163201 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.915 163201 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.915 163201 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.915 163201 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.916 163201 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.916 163201 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.916 163201 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.916 163201 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.916 163201 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.917 163201 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.918 163201 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.919 163201 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.920 163201 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.921 163201 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.922 163201 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.923 163201 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.924 163201 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.925 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.926 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.927 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.928 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.929 163201 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.930 163201 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:24:21 localhost ovn_metadata_agent[163196]: 2025-10-05 09:24:21.930 163201 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 5 05:24:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40450 DF PROTO=TCP SPT=46548 DPT=9105 SEQ=558724975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C41B60000000001030307)
Oct 5 05:24:25 localhost sshd[163387]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:24:25 localhost systemd-logind[760]: New session 54 of user zuul.
Oct 5 05:24:25 localhost systemd[1]: Started Session 54 of User zuul.
Oct 5 05:24:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57285 DF PROTO=TCP SPT=56944 DPT=9882 SEQ=712228722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C4A3C0000000001030307) Oct 5 05:24:26 localhost python3.9[163480]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:24:27 localhost python3.9[163576]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50891 DF PROTO=TCP SPT=33532 DPT=9100 SEQ=1358456903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C52760000000001030307) Oct 5 05:24:28 localhost python3.9[163680]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:28 localhost systemd[1]: libpod-ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1.scope: Deactivated successfully. 
Oct 5 05:24:28 localhost podman[163681]: 2025-10-05 09:24:28.278237245 +0000 UTC m=+0.073303694 container died ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, build-date=2025-07-21T14:56:59, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Oct 5 05:24:28 localhost systemd[1]: tmp-crun.DQhtmV.mount: Deactivated successfully. 
Oct 5 05:24:28 localhost podman[163681]: 2025-10-05 09:24:28.311869116 +0000 UTC m=+0.106935575 container cleanup ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:24:28 localhost podman[163696]: 2025-10-05 09:24:28.360564876 +0000 UTC m=+0.076806664 container remove ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1) Oct 5 05:24:28 localhost systemd[1]: libpod-conmon-ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1.scope: Deactivated successfully. Oct 5 05:24:29 localhost systemd[1]: var-lib-containers-storage-overlay-987c8818be76af06807da2048ae7d1664e12d00146f4e3ab569d4620a3bc5442-merged.mount: Deactivated successfully. Oct 5 05:24:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef67fcb28b3678bd5a2609ba968b6f8a8f5dd4c522fcde1fe5acf87ee85de3e1-userdata-shm.mount: Deactivated successfully. Oct 5 05:24:29 localhost python3.9[163802]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:24:29 localhost systemd[1]: Reloading. Oct 5 05:24:29 localhost systemd-rc-local-generator[163829]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:29 localhost systemd-sysv-generator[163833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:31 localhost python3.9[163928]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:24:31 localhost network[163945]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Oct 5 05:24:31 localhost network[163946]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:24:31 localhost network[163947]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:24:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17739 DF PROTO=TCP SPT=34048 DPT=9102 SEQ=600957124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C61770000000001030307) Oct 5 05:24:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:24:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24477 DF PROTO=TCP SPT=47662 DPT=9101 SEQ=2184323411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C69830000000001030307) Oct 5 05:24:35 localhost python3.9[164149]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:35 localhost systemd[1]: Reloading. Oct 5 05:24:35 localhost systemd-sysv-generator[164178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:35 localhost systemd-rc-local-generator[164175]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:24:36 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. Oct 5 05:24:36 localhost python3.9[164281]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24479 DF PROTO=TCP SPT=47662 DPT=9101 SEQ=2184323411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C75770000000001030307) Oct 5 05:24:37 localhost python3.9[164374]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:38 localhost python3.9[164467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:39 localhost python3.9[164560]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:39 localhost python3.9[164653]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:40 localhost python3.9[164746]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:24:41 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24480 DF PROTO=TCP SPT=47662 DPT=9101 SEQ=2184323411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C85360000000001030307) Oct 5 05:24:42 localhost python3.9[164839]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:43 localhost python3.9[164931]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:44 localhost python3.9[165023]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:44 localhost python3.9[165115]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:45 localhost python3.9[165207]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:46 localhost python3.9[165299]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5637 DF PROTO=TCP SPT=56854 DPT=9105 SEQ=2141485196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C9B2B0000000001030307) Oct 5 05:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:24:46 localhost podman[165372]: 2025-10-05 09:24:46.933871387 +0000 UTC m=+0.095939642 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:24:46 localhost podman[165372]: 2025-10-05 09:24:46.975170386 +0000 UTC m=+0.137238611 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:24:46 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:24:47 localhost python3.9[165406]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5638 DF PROTO=TCP SPT=56854 DPT=9105 SEQ=2141485196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75C9F360000000001030307) Oct 5 05:24:47 localhost python3.9[165508]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:48 localhost systemd[1]: Starting dnf makecache... Oct 5 05:24:48 localhost python3.9[165600]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:48 localhost dnf[165601]: Updating Subscription Management repositories. 
Oct 5 05:24:49 localhost python3.9[165693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:24:49 localhost podman[165786]: 2025-10-05 09:24:49.548707078 +0000 UTC m=+0.094664175 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:24:49 localhost systemd[1]: tmp-crun.pNFzBk.mount: Deactivated successfully. Oct 5 05:24:49 localhost podman[165786]: 2025-10-05 09:24:49.558173148 +0000 UTC m=+0.104130255 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Oct 5 05:24:49 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:24:49 localhost python3.9[165785]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5639 DF PROTO=TCP SPT=56854 DPT=9105 SEQ=2141485196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CA7360000000001030307) Oct 5 05:24:50 localhost dnf[165601]: Metadata cache refreshed recently. Oct 5 05:24:50 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Oct 5 05:24:50 localhost systemd[1]: Finished dnf makecache. Oct 5 05:24:50 localhost systemd[1]: dnf-makecache.service: Consumed 1.882s CPU time. 
Oct 5 05:24:50 localhost python3.9[165895]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:50 localhost python3.9[165987]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:51 localhost python3.9[166079]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:24:52 localhost python3.9[166171]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:53 localhost python3.9[166263]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True 
paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:24:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5640 DF PROTO=TCP SPT=56854 DPT=9105 SEQ=2141485196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CB6F60000000001030307) Oct 5 05:24:54 localhost python3.9[166355]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:24:54 localhost systemd[1]: Reloading. Oct 5 05:24:54 localhost systemd-rc-local-generator[166379]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:24:54 localhost systemd-sysv-generator[166382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:24:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:24:55 localhost python3.9[166483]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17544 DF PROTO=TCP SPT=32846 DPT=9882 SEQ=3394732401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CBF6C0000000001030307) Oct 5 05:24:56 localhost python3.9[166576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:56 localhost python3.9[166669]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56513 DF PROTO=TCP SPT=34124 DPT=9100 SEQ=2381847403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CC7770000000001030307) Oct 5 05:24:58 localhost python3.9[166762]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Oct 5 05:24:58 localhost python3.9[166855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:59 localhost python3.9[166948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:24:59 localhost python3.9[167041]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:25:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:25:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn 
KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.016 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 
last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 
stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564cf43c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 
MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Oct 5 05:25:00 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 76.0 (253 of 333 items), suggesting rotation. Oct 5 05:25:00 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:25:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:25:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:25:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24278 DF PROTO=TCP SPT=33350 DPT=9102 SEQ=2351590525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CD6B60000000001030307) Oct 5 05:25:03 localhost python3.9[167135]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Oct 5 05:25:03 localhost python3.9[167228]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 5 05:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20145 DF PROTO=TCP SPT=39110 DPT=9101 SEQ=1580814987 
ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CDEB30000000001030307) Oct 5 05:25:04 localhost python3.9[167326]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471152.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 5 05:25:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:25:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5661 writes, 24K keys, 5661 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5661 writes, 723 syncs, 7.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.012 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 
3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55656af1a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s 
write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Oct 5 05:25:06 localhost python3.9[167426]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:25:06 localhost python3.9[167480]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:25:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20147 DF PROTO=TCP SPT=39110 DPT=9101 SEQ=1580814987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75CEAB60000000001030307) Oct 5 05:25:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20148 DF PROTO=TCP SPT=39110 DPT=9101 SEQ=1580814987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080AC75CFA760000000001030307) Oct 5 05:25:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64560 DF PROTO=TCP SPT=50770 DPT=9105 SEQ=3472621228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D105A0000000001030307) Oct 5 05:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64561 DF PROTO=TCP SPT=50770 DPT=9105 SEQ=3472621228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D14760000000001030307) Oct 5 05:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:25:17 localhost podman[167555]: 2025-10-05 09:25:17.928486764 +0000 UTC m=+0.093198911 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:25:18 localhost podman[167555]: 2025-10-05 09:25:18.016149787 +0000 UTC m=+0.180861934 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:25:18 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:25:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:25:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64562 DF PROTO=TCP SPT=50770 DPT=9105 SEQ=3472621228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D1C760000000001030307) Oct 5 05:25:19 localhost systemd[1]: tmp-crun.V0Utna.mount: Deactivated successfully. Oct 5 05:25:19 localhost podman[167580]: 2025-10-05 09:25:19.9082896 +0000 UTC m=+0.083244742 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Oct 5 05:25:19 localhost podman[167580]: 2025-10-05 09:25:19.917041721 +0000 UTC m=+0.091996943 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Oct 5 05:25:19 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:25:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:25:20.357 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:25:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:25:20.358 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:25:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:25:20.358 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:25:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64563 DF PROTO=TCP SPT=50770 DPT=9105 SEQ=3472621228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D2C360000000001030307) Oct 5 05:25:26 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64356 DF PROTO=TCP SPT=33082 DPT=9882 SEQ=1190215893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D349C0000000001030307)
Oct 5 05:25:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35533 DF PROTO=TCP SPT=44388 DPT=9100 SEQ=1289902959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D3CB60000000001030307)
Oct 5 05:25:31 localhost kernel: SELinux: Converting 2746 SID table entries...
Oct 5 05:25:31 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:31 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:31 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3667 DF PROTO=TCP SPT=51980 DPT=9102 SEQ=2295977961 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D4BF60000000001030307)
Oct 5 05:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26568 DF PROTO=TCP SPT=55300 DPT=9101 SEQ=1116807288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D53E30000000001030307)
Oct 5 05:25:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26570 DF PROTO=TCP SPT=55300 DPT=9101 SEQ=1116807288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D5FF60000000001030307)
Oct 5 05:25:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26571 DF PROTO=TCP SPT=55300 DPT=9101 SEQ=1116807288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D6FB60000000001030307)
Oct 5 05:25:41 localhost kernel: SELinux: Converting 2749 SID table entries...
Oct 5 05:25:41 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:41 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:41 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33612 DF PROTO=TCP SPT=33642 DPT=9105 SEQ=1806092871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D858B0000000001030307)
Oct 5 05:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33613 DF PROTO=TCP SPT=33642 DPT=9105 SEQ=1806092871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D89760000000001030307)
Oct 5 05:25:48 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=20 res=1
Oct 5 05:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:25:48 localhost systemd[1]: tmp-crun.Cjkv9j.mount: Deactivated successfully.
Oct 5 05:25:48 localhost podman[168662]: 2025-10-05 09:25:48.942128868 +0000 UTC m=+0.099580830 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 05:25:48 localhost podman[168662]: 2025-10-05 09:25:48.977924745 +0000 UTC m=+0.135376667 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 5 05:25:48 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:25:49 localhost kernel: SELinux: Converting 2749 SID table entries...
Oct 5 05:25:49 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:49 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:49 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:49 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:49 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:49 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:49 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33614 DF PROTO=TCP SPT=33642 DPT=9105 SEQ=1806092871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75D91770000000001030307)
Oct 5 05:25:50 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=21 res=1
Oct 5 05:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:25:50 localhost podman[168694]: 2025-10-05 09:25:50.929041357 +0000 UTC m=+0.090932083 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:25:50 localhost podman[168694]: 2025-10-05 09:25:50.939871819 +0000 UTC m=+0.101762595 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:25:50 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:25:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33615 DF PROTO=TCP SPT=33642 DPT=9105 SEQ=1806092871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DA1360000000001030307)
Oct 5 05:25:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16909 DF PROTO=TCP SPT=53820 DPT=9882 SEQ=1920041655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DA9CC0000000001030307)
Oct 5 05:25:57 localhost kernel: SELinux: Converting 2749 SID table entries...
Oct 5 05:25:57 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:25:57 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:25:57 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:25:57 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:25:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:25:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:25:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:25:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19663 DF PROTO=TCP SPT=42336 DPT=9100 SEQ=2183674947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DB1F70000000001030307)
Oct 5 05:26:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55621 DF PROTO=TCP SPT=52232 DPT=9102 SEQ=3103638803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DC0F60000000001030307)
Oct 5 05:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8332 DF PROTO=TCP SPT=46340 DPT=9101 SEQ=1171188995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DC9130000000001030307)
Oct 5 05:26:06 localhost kernel: SELinux: Converting 2749 SID table entries...
Oct 5 05:26:06 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:26:06 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:26:06 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:26:06 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:26:06 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:26:06 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:26:06 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:26:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8334 DF PROTO=TCP SPT=46340 DPT=9101 SEQ=1171188995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DD5370000000001030307)
Oct 5 05:26:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8335 DF PROTO=TCP SPT=46340 DPT=9101 SEQ=1171188995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DE4F60000000001030307)
Oct 5 05:26:15 localhost kernel: SELinux: Converting 2749 SID table entries...
Oct 5 05:26:15 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:26:15 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:26:15 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:26:15 localhost systemd[1]: Reloading.
Oct 5 05:26:15 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=24 res=1
Oct 5 05:26:15 localhost systemd-rc-local-generator[168766]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:26:15 localhost systemd-sysv-generator[168773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:26:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:26:16 localhost systemd[1]: Reloading.
Oct 5 05:26:16 localhost systemd-sysv-generator[168807]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:26:16 localhost systemd-rc-local-generator[168802]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:26:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:26:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24402 DF PROTO=TCP SPT=46706 DPT=9105 SEQ=2415232721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DFABA0000000001030307)
Oct 5 05:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24403 DF PROTO=TCP SPT=46706 DPT=9105 SEQ=2415232721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75DFEB60000000001030307)
Oct 5 05:26:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24404 DF PROTO=TCP SPT=46706 DPT=9105 SEQ=2415232721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E06B70000000001030307)
Oct 5 05:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:26:19 localhost podman[168824]: 2025-10-05 09:26:19.917512963 +0000 UTC m=+0.083439341 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:26:19 localhost podman[168824]: 2025-10-05 09:26:19.951602085 +0000 UTC m=+0.117528473 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 5 05:26:19 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:26:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:26:20.359 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:26:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:26:20.360 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:26:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:26:20.360 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:26:21 localhost podman[168850]: 2025-10-05 09:26:21.927853763 +0000 UTC m=+0.099938973 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 5 05:26:21 localhost podman[168850]: 2025-10-05 09:26:21.935075341 +0000 UTC m=+0.107160531 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 5 05:26:21 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:26:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24405 DF PROTO=TCP SPT=46706 DPT=9105 SEQ=2415232721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E16760000000001030307)
Oct 5 05:26:24 localhost kernel: SELinux: Converting 2750 SID table entries...
Oct 5 05:26:24 localhost kernel: SELinux: policy capability network_peer_controls=1
Oct 5 05:26:24 localhost kernel: SELinux: policy capability open_perms=1
Oct 5 05:26:24 localhost kernel: SELinux: policy capability extended_socket_class=1
Oct 5 05:26:24 localhost kernel: SELinux: policy capability always_check_network=0
Oct 5 05:26:24 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Oct 5 05:26:24 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 5 05:26:24 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Oct 5 05:26:25 localhost dbus-broker-launch[755]: avc: op=load_policy lsm=selinux seqno=25 res=1
Oct 5 05:26:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53986 DF PROTO=TCP SPT=42610 DPT=9100 SEQ=1667587040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E1EF60000000001030307)
Oct 5 05:26:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53987 DF PROTO=TCP SPT=42610 DPT=9100 SEQ=1667587040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E26F70000000001030307)
Oct 5 05:26:28 localhost dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 5 05:26:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58434 DF PROTO=TCP SPT=51330 DPT=9102 SEQ=1908376922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E36360000000001030307)
Oct 5 05:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3352 DF PROTO=TCP SPT=42824 DPT=9101 SEQ=769395153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E3E430000000001030307)
Oct 5 05:26:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3354 DF PROTO=TCP SPT=42824 DPT=9101 SEQ=769395153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E4A360000000001030307)
Oct 5 05:26:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3355 DF PROTO=TCP SPT=42824 DPT=9101 SEQ=769395153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E59F60000000001030307)
Oct 5 05:26:44 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:3e:99:36 MACPROTO=0800 SRC=162.142.125.84 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=54 ID=24548 PROTO=TCP SPT=48553 DPT=9090 SEQ=365452750 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A68DC1AAF000000000103030A)
Oct 5 05:26:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51554 DF PROTO=TCP SPT=50514 DPT=9105 SEQ=1065298233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E6FEC0000000001030307)
Oct 5 05:26:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51556 DF PROTO=TCP SPT=50514 DPT=9105 SEQ=1065298233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E7BF60000000001030307)
Oct 5 05:26:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:26:50 localhost podman[175518]: 2025-10-05 09:26:50.942047218 +0000 UTC m=+0.092561370 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 05:26:50 localhost podman[175518]: 2025-10-05 09:26:50.978921966 +0000 UTC m=+0.129436088 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:26:50 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:26:52 localhost podman[177036]: 2025-10-05 09:26:52.917277758 +0000 UTC m=+0.079130033 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent)
Oct 5 05:26:52 localhost podman[177036]: 2025-10-05 09:26:52.950135896 +0000 UTC m=+0.111988131 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:26:52 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:26:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51557 DF PROTO=TCP SPT=50514 DPT=9105 SEQ=1065298233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E8BB60000000001030307) Oct 5 05:26:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61568 DF PROTO=TCP SPT=53018 DPT=9882 SEQ=1511580135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E942C0000000001030307) Oct 5 05:26:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34051 DF PROTO=TCP SPT=36100 DPT=9100 SEQ=3577303240 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75E9C360000000001030307) Oct 5 05:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64681 DF PROTO=TCP SPT=35806 DPT=9102 SEQ=719852570 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75EAB760000000001030307) Oct 5 05:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=466 DF PROTO=TCP SPT=37548 DPT=9101 SEQ=12044152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75EB3730000000001030307) Oct 5 05:27:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=468 DF PROTO=TCP SPT=37548 DPT=9101 SEQ=12044152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC75EBF760000000001030307) Oct 5 05:27:09 localhost systemd[1]: Stopping OpenSSH server daemon... Oct 5 05:27:09 localhost systemd[1]: sshd.service: Deactivated successfully. Oct 5 05:27:09 localhost systemd[1]: Stopped OpenSSH server daemon. Oct 5 05:27:09 localhost systemd[1]: sshd.service: Consumed 1.011s CPU time, no IO. Oct 5 05:27:09 localhost systemd[1]: Stopped target sshd-keygen.target. Oct 5 05:27:09 localhost systemd[1]: Stopping sshd-keygen.target... Oct 5 05:27:09 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 05:27:09 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 05:27:09 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Oct 5 05:27:09 localhost systemd[1]: Reached target sshd-keygen.target. Oct 5 05:27:09 localhost systemd[1]: Starting OpenSSH server daemon... Oct 5 05:27:09 localhost sshd[186871]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:27:09 localhost systemd[1]: Started OpenSSH server daemon. Oct 5 05:27:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=469 DF PROTO=TCP SPT=37548 DPT=9101 SEQ=12044152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75ECF370000000001030307) Oct 5 05:27:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 05:27:11 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 05:27:11 localhost systemd[1]: Reloading. 
Oct 5 05:27:11 localhost systemd-rc-local-generator[187101]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:11 localhost systemd-sysv-generator[187104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:11 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 05:27:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 05:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14234 DF PROTO=TCP SPT=43632 DPT=9105 SEQ=1743809658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75EE51B0000000001030307) Oct 5 05:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14235 DF PROTO=TCP SPT=43632 DPT=9105 SEQ=1743809658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75EE9360000000001030307) Oct 5 05:27:19 localhost python3.9[194038]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:19 localhost systemd[1]: Reloading. Oct 5 05:27:19 localhost systemd-sysv-generator[194287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:19 localhost systemd-rc-local-generator[194283]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14236 DF PROTO=TCP SPT=43632 DPT=9105 SEQ=1743809658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75EF1360000000001030307) Oct 5 05:27:20 localhost python3.9[194887]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:20 localhost systemd[1]: Reloading. 
Oct 5 05:27:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:27:20.360 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:27:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:27:20.361 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:27:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:27:20.361 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:27:20 localhost systemd-sysv-generator[195217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:20 localhost systemd-rc-local-generator[195212]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:27:21 localhost systemd[1]: tmp-crun.IqV3kX.mount: Deactivated successfully. 
Oct 5 05:27:21 localhost podman[195646]: 2025-10-05 09:27:21.204272403 +0000 UTC m=+0.101856721 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 05:27:21 localhost podman[195646]: 2025-10-05 09:27:21.250106144 +0000 UTC m=+0.147690492 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:27:21 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:27:21 localhost python3.9[195645]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:21 localhost systemd[1]: Reloading. Oct 5 05:27:21 localhost systemd-sysv-generator[195938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:21 localhost systemd-rc-local-generator[195933]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:27:22 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 05:27:22 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 05:27:22 localhost systemd[1]: man-db-cache-update.service: Consumed 13.535s CPU time. Oct 5 05:27:22 localhost systemd[1]: run-r701fa0617b074128b3ef0a8a0082eb69.service: Deactivated successfully. Oct 5 05:27:22 localhost systemd[1]: run-r83ff2c74256742f8a6e9ad6379bf2c07.service: Deactivated successfully. Oct 5 05:27:22 localhost python3.9[196285]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:22 localhost systemd[1]: Reloading. Oct 5 05:27:22 localhost systemd-rc-local-generator[196407]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:22 localhost systemd-sysv-generator[196410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:27:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14237 DF PROTO=TCP SPT=43632 DPT=9105 SEQ=1743809658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F00F60000000001030307) Oct 5 05:27:23 localhost podman[196434]: 2025-10-05 09:27:23.914639337 +0000 UTC m=+0.079615404 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent) Oct 5 05:27:23 localhost podman[196434]: 2025-10-05 09:27:23.923231159 +0000 UTC m=+0.088207236 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:27:23 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:27:24 localhost python3.9[196545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:24 localhost systemd[1]: Reloading. Oct 5 05:27:25 localhost systemd-sysv-generator[196575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:25 localhost systemd-rc-local-generator[196570]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10412 DF PROTO=TCP SPT=44462 DPT=9882 SEQ=1152344816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F095B0000000001030307) Oct 5 05:27:26 localhost python3.9[196730]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:26 localhost systemd[1]: Reloading. 
Oct 5 05:27:26 localhost systemd-rc-local-generator[196782]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:26 localhost systemd-sysv-generator[196785]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:27 localhost python3.9[196911]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:27 localhost systemd[1]: Reloading. Oct 5 05:27:27 localhost systemd-sysv-generator[196944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:27 localhost systemd-rc-local-generator[196940]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:27:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38862 DF PROTO=TCP SPT=37584 DPT=9100 SEQ=230274367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F11760000000001030307) Oct 5 05:27:28 localhost python3.9[197061]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:29 localhost python3.9[197174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:29 localhost systemd[1]: Reloading. Oct 5 05:27:29 localhost systemd-rc-local-generator[197217]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:29 localhost systemd-sysv-generator[197223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:31 localhost python3.9[197340]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:27:31 localhost systemd[1]: Reloading. Oct 5 05:27:31 localhost systemd-rc-local-generator[197368]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:27:31 localhost systemd-sysv-generator[197372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:27:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:27:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7795 DF PROTO=TCP SPT=43786 DPT=9102 SEQ=1144661693 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F20B60000000001030307) Oct 5 05:27:32 localhost python3.9[197489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14898 DF PROTO=TCP SPT=34936 DPT=9101 SEQ=2294916978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F28A20000000001030307) Oct 5 05:27:34 localhost python3.9[197602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:36 localhost python3.9[197715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14900 DF PROTO=TCP SPT=34936 DPT=9101 SEQ=2294916978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080AC75F34B60000000001030307) Oct 5 05:27:37 localhost python3.9[197828]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:38 localhost python3.9[197941]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:39 localhost python3.9[198054]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:40 localhost python3.9[198167]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14901 DF PROTO=TCP SPT=34936 DPT=9101 SEQ=2294916978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F44760000000001030307) Oct 5 05:27:41 localhost python3.9[198280]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:42 localhost python3.9[198393]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:44 localhost python3.9[198506]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None 
force=None Oct 5 05:27:44 localhost python3.9[198619]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41910 DF PROTO=TCP SPT=43410 DPT=9105 SEQ=1841911952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F5A4A0000000001030307) Oct 5 05:27:46 localhost python3.9[198732]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:47 localhost python3.9[198845]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41911 DF PROTO=TCP SPT=43410 DPT=9105 SEQ=1841911952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F5E360000000001030307) Oct 5 05:27:49 localhost python3.9[198958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Oct 5 05:27:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41912 DF PROTO=TCP SPT=43410 DPT=9105 SEQ=1841911952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F66360000000001030307) Oct 5 05:27:51 localhost python3.9[199071]: 
ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:27:51 localhost podman[199181]: 2025-10-05 09:27:51.928656117 +0000 UTC m=+0.088580886 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001) Oct 5 05:27:51 localhost podman[199181]: 2025-10-05 09:27:51.966786911 +0000 UTC m=+0.126711690 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:27:51 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
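The podman entries above follow a fixed shape: a timestamp, an event type (`health_status`, `exec_died`), a 64-hex container ID, and a parenthesized comma-separated attribute list whose `config_data` value embeds nested braces and brackets. A minimal parsing sketch (field names taken from the log text itself; not an official podman format specification):

```python
import re

# Matches podman container-event lines as they appear in this journal:
#   ... container <event> <64-hex id> (key=value, key=value, ...)
EVENT_RE = re.compile(
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)"
)

def parse_podman_event(line: str):
    """Return (event, container_id, attrs-dict) or None if no match."""
    m = EVENT_RE.search(line)
    if not m:
        return None
    attrs = {}
    # The attribute list is comma-separated key=value pairs, but values such
    # as config_data contain commas inside {...} / [...], so only split on
    # commas at nesting depth zero.
    depth, buf = 0, []
    for ch in m.group("attrs") + ",":
        if ch in "{[(":
            depth += 1
        elif ch in "}])":
            depth -= 1
        if ch == "," and depth == 0:
            part = "".join(buf).strip()
            if "=" in part:
                k, _, v = part.partition("=")
                attrs[k.strip()] = v
            buf = []
        else:
            buf.append(ch)
    return m.group("event"), m.group("cid"), attrs
```

Applied to the `health_status` line above, this yields the event type plus attributes like `name=ovn_controller` and `health_status=healthy`, which is usually all a log-scraping check needs.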
Oct 5 05:27:52 localhost python3.9[199182]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:52 localhost python3.9[199314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:53 localhost python3.9[199424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41913 DF PROTO=TCP SPT=43410 DPT=9105 SEQ=1841911952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F75F60000000001030307) Oct 5 05:27:53 localhost python3.9[199534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:27:54 localhost systemd[1]: tmp-crun.HbMnPh.mount: Deactivated successfully. Oct 5 05:27:54 localhost podman[199645]: 2025-10-05 09:27:54.505817717 +0000 UTC m=+0.092765184 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:27:54 localhost podman[199645]: 2025-10-05 09:27:54.534743443 +0000 UTC m=+0.121690910 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:27:54 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:27:54 localhost python3.9[199644]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:27:55 localhost python3.9[199771]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12977 DF PROTO=TCP SPT=43210 DPT=9882 SEQ=1738350980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F7E8C0000000001030307) Oct 5 05:27:56 localhost python3.9[199861]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656474.8389497-1646-941256190512/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:27:57 localhost python3.9[199971]: 
ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:57 localhost python3.9[200061]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656476.5416682-1646-241127631843844/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:27:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48827 DF PROTO=TCP SPT=52936 DPT=9100 SEQ=3191740127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F86B70000000001030307) Oct 5 05:27:58 localhost python3.9[200171]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:58 localhost python3.9[200261]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656477.713341-1646-199182025239122/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:27:59 localhost python3.9[200371]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:27:59 localhost python3.9[200461]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656478.913241-1646-280126821172024/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:01 localhost python3.9[200571]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:01 localhost python3.9[200661]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656480.5732179-1646-22956182889898/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56493 DF PROTO=TCP SPT=40772 DPT=9102 SEQ=202621157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F95B60000000001030307) Oct 5 05:28:02 localhost python3.9[200771]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:03 localhost 
python3.9[200861]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656481.849413-1646-247904373433987/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:03 localhost python3.9[200971]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36461 DF PROTO=TCP SPT=40532 DPT=9101 SEQ=1721374187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75F9DD30000000001030307) Oct 5 05:28:04 localhost python3.9[201059]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759656483.278683-1646-110983331489771/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:04 localhost python3.9[201169]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:05 localhost python3.9[201259]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656484.4592886-1646-247020172762648/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:06 localhost python3.9[201369]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36463 DF PROTO=TCP SPT=40532 DPT=9101 SEQ=1721374187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FA9F60000000001030307) Oct 5 05:28:07 localhost python3.9[201479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:08 localhost python3.9[201589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:08 localhost python3.9[201699]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:09 localhost python3.9[201809]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:10 localhost python3.9[201919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:10 localhost python3.9[202029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:11 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36464 DF PROTO=TCP SPT=40532 DPT=9101 SEQ=1721374187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FB9B60000000001030307) Oct 5 05:28:11 localhost python3.9[202139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:12 localhost python3.9[202249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:13 localhost python3.9[202359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:14 localhost python3.9[202469]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:15 localhost python3.9[202579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:15 localhost python3.9[202689]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:16 localhost python3.9[202799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59445 DF PROTO=TCP SPT=34808 DPT=9105 SEQ=2164755718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FCF7B0000000001030307) Oct 5 05:28:17 localhost python3.9[202909]: 
ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59446 DF PROTO=TCP SPT=34808 DPT=9105 SEQ=2164755718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FD3770000000001030307) Oct 5 05:28:17 localhost python3.9[203019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:18 localhost python3.9[203107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656497.250621-2308-148050475085254/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:18 localhost python3.9[203217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:19 localhost python3.9[203305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root 
mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656498.5018227-2308-249403212362126/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59447 DF PROTO=TCP SPT=34808 DPT=9105 SEQ=2164755718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FDB760000000001030307) Oct 5 05:28:20 localhost python3.9[203415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:28:20.361 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:28:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:28:20.362 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:28:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:28:20.362 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:28:20 localhost python3.9[203503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656499.7153895-2308-129622572229256/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:21 localhost python3.9[203613]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:21 localhost python3.9[203701]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656500.8668313-2308-13453354683171/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
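The long run of `ansible.builtin.file` / `ansible.legacy.copy` tasks above follows one pattern per libvirt daemon: create a `*.socket.d` drop-in directory under `/etc/systemd/system/`, then install an `override.conf` rendered from `libvirt-socket.unit.j2` (the identical checksum `0bad41f4…` on every copy shows the same template output is deployed everywhere; the rendered contents themselves are not logged). A sketch that reproduces the directory set seen in this log, assuming the daemon/variant pattern continues as shown:

```python
# Drop-in directories created by the ansible.builtin.file tasks in this log.
# virtlogd appears only with the base and -admin sockets here; the other
# daemons get base, -ro, and -admin variants.
DAEMONS = ["virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"]
VARIANTS = ["", "-ro", "-admin"]

def dropin_dirs():
    dirs = []
    for daemon in DAEMONS:
        for variant in VARIANTS:
            if daemon == "virtlogd" and variant == "-ro":
                continue  # no virtlogd-ro.socket.d task appears in this log
            dirs.append(f"/etc/systemd/system/{daemon}{variant}.socket.d")
    return dirs
```

systemd merges any `override.conf` in these `.socket.d` directories over the packaged unit at daemon-reload time, which is why the play only needs one template for all fourteen sockets.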
Oct 5 05:28:22 localhost podman[203812]: 2025-10-05 09:28:22.471260405 +0000 UTC m=+0.081207680 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:28:22 localhost podman[203812]: 2025-10-05 09:28:22.53616971 +0000 UTC m=+0.146116945 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:28:22 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
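The recurring `kernel: DROPPING:` entries are netfilter LOG output with a custom `DROPPING:` prefix: space-separated `KEY=VALUE` tokens plus bare flags (`DF`, `SYN`). The same `SEQ` repeating toward destination ports 9100/9101/9102/9105 shows SYN retransmissions being dropped; ports in that range are commonly used by Prometheus-style exporters, though that association is an inference, not something the log states. A minimal parsing sketch:

```python
def parse_drop(line: str) -> dict:
    """Parse a netfilter LOG line carrying the 'DROPPING:' prefix.

    KEY=VALUE tokens become dict entries; bare flag tokens (DF, SYN, ...)
    map to True. The trailing OPT (...) TCP-options blob is skipped.
    """
    fields = {}
    _, _, rest = line.partition("DROPPING: ")
    for tok in rest.split():
        if tok.startswith("OPT"):
            break  # raw TCP options hex; not needed for triage
        k, sep, v = tok.partition("=")
        fields[k] = v if sep else True
    return fields
```

Grouping parsed lines by `(SRC, DPT)` makes the retry pattern obvious and points at the firewall rule (or missing ACCEPT rule) on `br-ex` that is rejecting the scrapes.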
Oct 5 05:28:22 localhost python3.9[203811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:23 localhost python3.9[203924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656502.0809095-2308-117756548150360/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:23 localhost python3.9[204034]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59448 DF PROTO=TCP SPT=34808 DPT=9105 SEQ=2164755718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FEB360000000001030307) Oct 5 05:28:24 localhost python3.9[204122]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656503.2694194-2308-87065402195918/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None 
attributes=None Oct 5 05:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:28:24 localhost podman[204163]: 2025-10-05 09:28:24.91500902 +0000 UTC m=+0.083298942 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:28:24 localhost podman[204163]: 2025-10-05 09:28:24.920955991 +0000 UTC m=+0.089245923 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 05:28:24 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:28:25 localhost python3.9[204250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:25 localhost python3.9[204338]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656504.7941613-2308-85681262198281/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37240 DF PROTO=TCP SPT=46474 DPT=9882 SEQ=3586219300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FF3BC0000000001030307) Oct 5 05:28:26 localhost python3.9[204448]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:27 localhost python3.9[204536]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656506.0293598-2308-139045453906311/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 
checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30373 DF PROTO=TCP SPT=40862 DPT=9100 SEQ=2394803759 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC75FFBF70000000001030307) Oct 5 05:28:28 localhost python3.9[204646]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:29 localhost python3.9[204734]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656508.0971453-2308-262605502709179/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:29 localhost python3.9[204844]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:30 localhost python3.9[204968]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656509.1833887-2308-236945906054299/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 
checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:30 localhost python3.9[205135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:31 localhost python3.9[205253]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656510.398439-2308-7500945390220/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34272 DF PROTO=TCP SPT=45922 DPT=9102 SEQ=3000084335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7600AF60000000001030307) Oct 5 05:28:32 localhost python3.9[205381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:32 localhost python3.9[205469]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656511.6419392-2308-39719748195412/.source.conf follow=False 
_original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:33 localhost python3.9[205579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:33 localhost python3.9[205667]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656512.796186-2308-105049344461929/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45354 DF PROTO=TCP SPT=52468 DPT=9101 SEQ=4041512272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76013030000000001030307) Oct 5 05:28:34 localhost python3.9[205777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:35 localhost python3.9[205865]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656513.9832911-2308-244508225869624/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:35 localhost python3.9[205973]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:28:36 localhost python3.9[206086]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Oct 5 05:28:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45356 DF PROTO=TCP SPT=52468 DPT=9101 SEQ=4041512272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7601EF60000000001030307) Oct 5 05:28:37 localhost python3.9[206196]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:28:37 localhost systemd[1]: Reloading. Oct 5 05:28:37 localhost systemd-rc-local-generator[206222]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:28:37 localhost systemd-sysv-generator[206226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:28:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:28:38 localhost systemd[1]: Starting libvirt logging daemon socket... Oct 5 05:28:38 localhost systemd[1]: Listening on libvirt logging daemon socket. Oct 5 05:28:38 localhost systemd[1]: Starting libvirt logging daemon admin socket... Oct 5 05:28:38 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Oct 5 05:28:38 localhost systemd[1]: Starting libvirt logging daemon... Oct 5 05:28:38 localhost systemd[1]: Started libvirt logging daemon. Oct 5 05:28:39 localhost python3.9[206348]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:28:39 localhost systemd[1]: Reloading. Oct 5 05:28:39 localhost systemd-sysv-generator[206375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:28:39 localhost systemd-rc-local-generator[206370]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:28:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon socket... Oct 5 05:28:39 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Oct 5 05:28:39 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. 
Oct 5 05:28:39 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Oct 5 05:28:39 localhost systemd[1]: Starting libvirt nodedev daemon... Oct 5 05:28:39 localhost systemd[1]: Started libvirt nodedev daemon. Oct 5 05:28:40 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Oct 5 05:28:40 localhost python3.9[206522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:28:40 localhost systemd[1]: Reloading. Oct 5 05:28:40 localhost systemd-rc-local-generator[206545]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:28:40 localhost systemd-sysv-generator[206550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:28:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:28:40 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Oct 5 05:28:40 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Oct 5 05:28:40 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon socket... Oct 5 05:28:40 localhost systemd[1]: Listening on libvirt proxy daemon socket. Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Oct 5 05:28:40 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. 
Oct 5 05:28:40 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Oct 5 05:28:40 localhost systemd[1]: Starting libvirt proxy daemon... Oct 5 05:28:40 localhost systemd[1]: Started libvirt proxy daemon. Oct 5 05:28:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45357 DF PROTO=TCP SPT=52468 DPT=9101 SEQ=4041512272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7602EB60000000001030307) Oct 5 05:28:41 localhost python3.9[206701]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:28:41 localhost systemd[1]: Reloading. Oct 5 05:28:41 localhost systemd-rc-local-generator[206723]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:28:41 localhost systemd-sysv-generator[206726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:28:41 localhost setroubleshoot[206523]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 78e2a63d-0480-40f3-b3df-a15b32d88bd5 Oct 5 05:28:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:28:41 localhost setroubleshoot[206523]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Oct 5 05:28:41 localhost setroubleshoot[206523]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 78e2a63d-0480-40f3-b3df-a15b32d88bd5 Oct 5 05:28:41 localhost setroubleshoot[206523]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Oct 5 05:28:41 localhost systemd[1]: Listening on libvirt locking daemon socket. Oct 5 05:28:41 localhost systemd[1]: Starting libvirt QEMU daemon socket... Oct 5 05:28:41 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 5 05:28:41 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Oct 5 05:28:41 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Oct 5 05:28:41 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Oct 5 05:28:41 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Oct 5 05:28:41 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Oct 5 05:28:41 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Oct 5 05:28:41 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Oct 5 05:28:41 localhost systemd[1]: Starting libvirt QEMU daemon... Oct 5 05:28:41 localhost systemd[1]: Started libvirt QEMU daemon. Oct 5 05:28:42 localhost python3.9[206874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:28:42 localhost systemd[1]: Reloading. 
Oct 5 05:28:42 localhost systemd-rc-local-generator[206896]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:28:42 localhost systemd-sysv-generator[206900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:28:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:28:42 localhost systemd[1]: Starting libvirt secret daemon socket... Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt secret daemon socket. Oct 5 05:28:42 localhost systemd[1]: Starting libvirt secret daemon admin socket... Oct 5 05:28:42 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Oct 5 05:28:42 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Oct 5 05:28:42 localhost systemd[1]: Starting libvirt secret daemon... Oct 5 05:28:42 localhost systemd[1]: Started libvirt secret daemon. 
Oct 5 05:28:44 localhost python3.9[207043]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:45 localhost python3.9[207153]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:28:46 localhost python3.9[207263]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:28:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18953 DF PROTO=TCP SPT=36424 DPT=9105 SEQ=1315050821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76044AB0000000001030307) Oct 5 05:28:47 localhost python3.9[207375]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 
5 05:28:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18954 DF PROTO=TCP SPT=36424 DPT=9105 SEQ=1315050821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76048B60000000001030307) Oct 5 05:28:47 localhost python3.9[207483]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:49 localhost python3.9[207569]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656527.5046828-3173-175593835002014/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c0cd5a488d0709b14bfd915c93171010d2c54076 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18955 DF PROTO=TCP SPT=36424 DPT=9105 SEQ=1315050821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76050B60000000001030307) Oct 5 05:28:49 localhost python3.9[207679]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 659062ac-50b4-5607-b699-3105da7f55ee#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:28:50 localhost python3.9[207799]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:51 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Oct 5 05:28:51 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Oct 5 05:28:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:28:52 localhost podman[208138]: 2025-10-05 09:28:52.899411774 +0000 UTC m=+0.084871373 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:28:52 localhost podman[208138]: 2025-10-05 09:28:52.999132161 +0000 UTC m=+0.184591760 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:28:53 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:28:53 localhost python3.9[208137]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:53 localhost python3.9[208271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18956 DF PROTO=TCP SPT=36424 DPT=9105 SEQ=1315050821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76060760000000001030307) Oct 5 05:28:54 localhost python3.9[208359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656533.2007918-3338-4794198227861/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:54 localhost python3.9[208469]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None setype=None attributes=None Oct 5 05:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:28:55 localhost systemd[1]: tmp-crun.kWffVS.mount: Deactivated successfully. Oct 5 05:28:55 localhost podman[208579]: 2025-10-05 09:28:55.669882673 +0000 UTC m=+0.096151359 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 5 05:28:55 localhost podman[208579]: 2025-10-05 09:28:55.678154153 +0000 UTC m=+0.104422889 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, 
config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:28:55 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:28:55 localhost python3.9[208580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51948 DF PROTO=TCP SPT=53114 DPT=9882 SEQ=3743593858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76068EB0000000001030307) Oct 5 05:28:56 localhost python3.9[208654]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:56 localhost python3.9[208764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:57 localhost python3.9[208821]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.z4ixr9fq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
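The repeated `ansible-ansible.legacy.stat` calls above all pass `get_checksum=True checksum_algorithm=sha1`; the checksums they report (e.g. `dc5ee716…` for `libvirt.yaml`) are plain SHA-1 digests of the file contents, which the subsequent `copy` tasks compare against to decide whether to rewrite the file. A sketch of computing the same kind of checksum:

```python
import hashlib

# Compute a SHA-1 file checksum like the checksum_algorithm=sha1 stat
# calls above, reading in chunks so large files stay cheap on memory.
# Illustrative sketch, not the ansible implementation itself.
def sha1_of(path: str, chunk_size: int = 65536) -> str:
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```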
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23531 DF PROTO=TCP SPT=44764 DPT=9100 SEQ=2962092022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76070F60000000001030307) Oct 5 05:28:58 localhost python3.9[208931]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:28:58 localhost python3.9[208988]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:28:59 localhost python3.9[209098]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:00 localhost python3[209209]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:29:00 localhost python3.9[209319]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:01 localhost python3.9[209376]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root 
dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11559 DF PROTO=TCP SPT=57970 DPT=9102 SEQ=2895494863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76080370000000001030307) Oct 5 05:29:02 localhost python3.9[209486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:02 localhost python3.9[209543]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:03 localhost python3.9[209653]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:03 localhost python3.9[209710]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18200 DF PROTO=TCP SPT=36334 DPT=9101 SEQ=1273741929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76088340000000001030307) Oct 5 05:29:04 localhost python3.9[209820]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:04 localhost python3.9[209877]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:05 localhost python3.9[209987]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:06 localhost python3.9[210077]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759656545.2124414-3712-164124787509935/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 
5 05:29:07 localhost python3.9[210187]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18202 DF PROTO=TCP SPT=36334 DPT=9101 SEQ=1273741929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76094360000000001030307) Oct 5 05:29:07 localhost python3.9[210297]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:09 localhost python3.9[210410]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:10 localhost python3.9[210520]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f 
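The validation step above concatenates the five nftables fragments in a fixed order (chains, flushes, rules, update-jumps, jumps) and feeds the result to `nft -c -f -`, which parses and checks the ruleset without installing it. A sketch that just reconstructs that pipeline string from the file list shown in the log (actually running it requires `nft` on the host):

```python
import shlex

# Rebuild the ruleset check pipeline seen in the log:
#   set -o pipefail; cat <chains> <flushes> <rules> <update-jumps> <jumps> \
#     | nft -c -f -
# Order matters: chains must be defined before rules that reference them.
NFT_PARTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def nft_check_pipeline(parts=NFT_PARTS) -> str:
    files = " ".join(shlex.quote(p) for p in parts)
    return f"set -o pipefail; cat {files} | nft -c -f -"
```

The same ordering logic explains the `blockinfile` task that follows: `/etc/sysconfig/nftables.conf` gets `include` lines for `iptables.nft`, `edpm-chains.nft`, `edpm-rules.nft`, and `edpm-jumps.nft` so the checked ruleset is also restored at boot.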
/etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18203 DF PROTO=TCP SPT=36334 DPT=9101 SEQ=1273741929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760A3F60000000001030307) Oct 5 05:29:11 localhost python3.9[210631]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:11 localhost python3.9[210743]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:29:12 localhost python3.9[210856]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:13 localhost python3.9[210966]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:13 localhost python3.9[211054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 
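The sequence above is a marker-file gate: `edpm-rules.nft.changed` is touched after new rules are written, a `stat` checks for it, the flush/rules/update-jumps fragments are piped into `nft -f -` only if it exists, and the marker is then removed (`state=absent`) so an unchanged ruleset is not reloaded on the next run. A minimal sketch of that pattern (`apply_fn` is a hypothetical placeholder for the `nft -f -` step):

```python
import os

# Changed-marker gate as seen in the log: apply the ruleset only when the
# marker file exists, then delete the marker so reruns become no-ops.
# Sketch only; apply_fn stands in for piping the rule files into nft.
def apply_if_changed(marker: str, apply_fn) -> bool:
    if not os.path.exists(marker):   # the ansible stat step
        return False
    apply_fn()                       # e.g. cat flushes/rules/jumps | nft -f -
    os.remove(marker)                # the state=absent cleanup step
    return True
```

The touch/stat/remove trio costs three extra tasks but keeps the expensive (and briefly disruptive) ruleset reload idempotent across repeated plays.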
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656552.8459804-3929-177071598548548/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:14 localhost python3.9[211164]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:15 localhost python3.9[211252]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656554.1553035-3974-129199541721019/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:15 localhost python3.9[211362]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:16 localhost python3.9[211450]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656555.3601904-4020-115553092210600/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1001 DF PROTO=TCP SPT=43508 DPT=9105 SEQ=3699340784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760B9DA0000000001030307) Oct 5 05:29:17 localhost python3.9[211560]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:17 localhost systemd[1]: Reloading. Oct 5 05:29:17 localhost systemd-rc-local-generator[211584]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:17 localhost systemd-sysv-generator[211590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:17 localhost systemd[1]: Reached target edpm_libvirt.target. Oct 5 05:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1002 DF PROTO=TCP SPT=43508 DPT=9105 SEQ=3699340784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760BDF60000000001030307) Oct 5 05:29:18 localhost python3.9[211710]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Oct 5 05:29:18 localhost systemd[1]: Reloading. 
Oct 5 05:29:18 localhost systemd-rc-local-generator[211736]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:18 localhost systemd-sysv-generator[211739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:18 localhost systemd[1]: Reloading. Oct 5 05:29:18 localhost systemd-rc-local-generator[211776]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:18 localhost systemd-sysv-generator[211779]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:19 localhost systemd[1]: session-54.scope: Deactivated successfully. Oct 5 05:29:19 localhost systemd[1]: session-54.scope: Consumed 3min 38.680s CPU time. Oct 5 05:29:19 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit. Oct 5 05:29:19 localhost systemd-logind[760]: Removed session 54. 
Oct 5 05:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1003 DF PROTO=TCP SPT=43508 DPT=9105 SEQ=3699340784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760C5F60000000001030307) Oct 5 05:29:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:29:20.363 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:29:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:29:20.365 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:29:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:29:20.365 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:29:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1004 DF PROTO=TCP SPT=43508 DPT=9105 SEQ=3699340784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760D5B60000000001030307) Oct 5 05:29:23 localhost podman[211802]: 2025-10-05 09:29:23.901563611 +0000 UTC m=+0.069613094 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:29:23 localhost podman[211802]: 2025-10-05 09:29:23.940023183 +0000 UTC m=+0.108072676 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Oct 5 05:29:23 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:29:25 localhost sshd[211828]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:29:25 localhost systemd-logind[760]: New session 55 of user zuul. Oct 5 05:29:25 localhost systemd[1]: Started Session 55 of User zuul. Oct 5 05:29:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:29:25 localhost systemd[1]: tmp-crun.B2M6NL.mount: Deactivated successfully. 
Oct 5 05:29:25 localhost podman[211881]: 2025-10-05 09:29:25.929665009 +0000 UTC m=+0.087407763 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent) Oct 5 05:29:25 localhost podman[211881]: 2025-10-05 09:29:25.960088566 +0000 UTC 
m=+0.117831320 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:29:25 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:29:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14283 DF PROTO=TCP SPT=50486 DPT=9882 SEQ=2451926456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760DE1C0000000001030307) Oct 5 05:29:26 localhost python3.9[211957]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:29:27 localhost python3.9[212071]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22835 DF PROTO=TCP SPT=41784 DPT=9100 SEQ=2410275881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760E6360000000001030307) Oct 5 05:29:28 localhost python3.9[212181]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:28 localhost python3.9[212291]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:29 localhost python3.9[212401]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:29:30 localhost python3.9[212511]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:31 localhost python3.9[212621]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37658 DF PROTO=TCP SPT=35556 DPT=9102 SEQ=3878501351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760F5760000000001030307) Oct 5 05:29:32 localhost python3.9[212769]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:32 localhost systemd[1]: Reloading. 
Oct 5 05:29:32 localhost systemd-sysv-generator[212817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:32 localhost systemd-rc-local-generator[212814]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52236 DF PROTO=TCP SPT=50798 DPT=9101 SEQ=3420125674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC760FD630000000001030307) Oct 5 05:29:35 localhost python3.9[212966]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:29:35 localhost network[212983]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:29:35 localhost network[212984]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:29:35 localhost network[212985]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:29:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:29:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52238 DF PROTO=TCP SPT=50798 DPT=9101 SEQ=3420125674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76109770000000001030307) Oct 5 05:29:39 localhost python3.9[213217]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:40 localhost systemd[1]: Reloading. Oct 5 05:29:40 localhost systemd-rc-local-generator[213244]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:40 localhost systemd-sysv-generator[213249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:29:41 localhost python3.9[213363]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52239 DF PROTO=TCP SPT=50798 DPT=9101 SEQ=3420125674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76119360000000001030307) Oct 5 05:29:41 localhost python3.9[213473]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:29:42 localhost python3.9[213585]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:43 localhost python3.9[213695]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:44 localhost python3.9[213805]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:45 localhost python3.9[213862]: 
ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:46 localhost python3.9[213972]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:46 localhost python3.9[214029]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5006 DF PROTO=TCP SPT=51136 DPT=9105 SEQ=2350317305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7612F0B0000000001030307) Oct 5 05:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5007 DF PROTO=TCP SPT=51136 DPT=9105 SEQ=2350317305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76132F60000000001030307) Oct 5 05:29:47 localhost python3.9[214139]: 
ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:48 localhost python3.9[214249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:48 localhost python3.9[214306]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:49 localhost python3.9[214416]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5008 DF PROTO=TCP SPT=51136 DPT=9105 SEQ=2350317305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7613AF60000000001030307) Oct 5 05:29:50 localhost python3.9[214473]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset 
_original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:51 localhost python3.9[214583]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:51 localhost systemd[1]: Reloading. Oct 5 05:29:51 localhost systemd-rc-local-generator[214609]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:51 localhost systemd-sysv-generator[214613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:29:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:29:52 localhost python3.9[214731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:52 localhost python3.9[214788]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:53 localhost python3.9[214898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5009 DF PROTO=TCP SPT=51136 DPT=9105 SEQ=2350317305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7614AB60000000001030307) Oct 5 05:29:53 localhost python3.9[214955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:29:54 localhost systemd[1]: tmp-crun.gsrAWL.mount: Deactivated successfully. Oct 5 05:29:54 localhost podman[215066]: 2025-10-05 09:29:54.509229136 +0000 UTC m=+0.100159453 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:29:54 localhost podman[215066]: 2025-10-05 09:29:54.581048732 +0000 UTC m=+0.171979089 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Oct 5 05:29:54 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:29:54 localhost python3.9[215065]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:29:54 localhost systemd[1]: Reloading. Oct 5 05:29:54 localhost systemd-rc-local-generator[215115]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:29:54 localhost systemd-sysv-generator[215119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:29:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:29:55 localhost systemd[1]: Starting Create netns directory... Oct 5 05:29:55 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:29:55 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:29:55 localhost systemd[1]: Finished Create netns directory. Oct 5 05:29:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19134 DF PROTO=TCP SPT=54264 DPT=9882 SEQ=389739378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761534C0000000001030307) Oct 5 05:29:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:29:56 localhost systemd[1]: tmp-crun.ScsNmU.mount: Deactivated successfully. 
Oct 5 05:29:56 localhost podman[215245]: 2025-10-05 09:29:56.897067798 +0000 UTC m=+0.101798828 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:29:56 localhost podman[215245]: 2025-10-05 09:29:56.928197796 +0000 UTC 
m=+0.132928886 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:29:56 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:29:57 localhost python3.9[215244]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:29:57 localhost python3.9[215372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:29:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52388 DF PROTO=TCP SPT=57554 DPT=9100 SEQ=3160684552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7615B760000000001030307) Oct 5 05:29:59 localhost python3.9[215460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656597.2350883-698-231438636106169/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:00 localhost python3.9[215570]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:01 localhost python3.9[215680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29884 DF PROTO=TCP SPT=49430 DPT=9102 SEQ=1700529683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7616A770000000001030307) Oct 5 05:30:02 localhost python3.9[215770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656600.9295673-773-39727353668485/.source.json _original_basename=.g08u3rd_ follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:02 localhost python3.9[215880]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42766 DF PROTO=TCP SPT=48430 DPT=9101 SEQ=789144014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC76172930000000001030307) Oct 5 05:30:05 localhost python3.9[216188]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False Oct 5 05:30:06 localhost python3.9[216298]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:30:06 localhost python3.9[216408]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:30:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42768 DF PROTO=TCP SPT=48430 DPT=9101 SEQ=789144014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7617EB60000000001030307) Oct 5 05:30:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42769 DF PROTO=TCP SPT=48430 DPT=9101 SEQ=789144014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7618E760000000001030307) Oct 5 05:30:11 localhost python3[216545]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:30:13 localhost podman[216559]: 2025-10-05 09:30:11.48483002 +0000 UTC m=+0.047548262 image pull quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 5 05:30:13 localhost python3[216545]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "777353c8928aa59ae2473c1d38acf1eefa9a0dfeca7b821fed936f9ff9383648",#012 "Digest": "sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce",#012 "RepoTags": [#012 
"quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:14:31.883735142Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 403870347,#012 "VirtualSize": 403870347,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:5517f28613540e56901977cf7926b9c77e610f33e0d02e83afbce9137bbc7d2a"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 
"created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which Oct 5 05:30:13 localhost podman[216620]: 2025-10-05 09:30:13.561923116 +0000 UTC m=+0.085257476 container remove 6655e8db142cc32c5173ed4833151e35f96aff5c5d3145f85a59f81fd3277097 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, vcs-type=git, io.openshift.expose-services=) Oct 5 05:30:13 localhost python3[216545]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force iscsid Oct 5 05:30:13 localhost podman[216634]: Oct 5 05:30:13 localhost podman[216634]: 2025-10-05 09:30:13.652548376 +0000 UTC m=+0.073859176 container create 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Oct 
5 05:30:13 localhost podman[216634]: 2025-10-05 09:30:13.622328005 +0000 UTC m=+0.043638835 image pull quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 5 05:30:13 localhost python3[216545]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified Oct 5 05:30:14 localhost python3.9[216781]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:15 localhost python3.9[216893]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:15 localhost python3.9[216948]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:16 localhost python3.9[217057]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656616.0455978-1036-170021650513413/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:16 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42277 DF PROTO=TCP SPT=34952 DPT=9105 SEQ=3831920360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761A43A0000000001030307) Oct 5 05:30:17 localhost python3.9[217112]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:30:17 localhost systemd[1]: Reloading. Oct 5 05:30:17 localhost systemd-rc-local-generator[217136]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:30:17 localhost systemd-sysv-generator[217142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:30:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42278 DF PROTO=TCP SPT=34952 DPT=9105 SEQ=3831920360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761A8360000000001030307) Oct 5 05:30:18 localhost python3.9[217203]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:30:18 localhost systemd[1]: Reloading. Oct 5 05:30:18 localhost systemd-rc-local-generator[217228]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:30:18 localhost systemd-sysv-generator[217232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:30:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:30:18 localhost systemd[1]: Starting iscsid container... Oct 5 05:30:18 localhost systemd[1]: Started libcrun container. Oct 5 05:30:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5173e5f395dd54856819bdabb620f2befcebd20cb09d8886a7d6a40aaadc39/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:30:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5173e5f395dd54856819bdabb620f2befcebd20cb09d8886a7d6a40aaadc39/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 5 05:30:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5173e5f395dd54856819bdabb620f2befcebd20cb09d8886a7d6a40aaadc39/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:30:18 localhost podman[217244]: 2025-10-05 09:30:18.749594133 +0000 UTC m=+0.150994150 container init 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:30:18 localhost iscsid[217258]: + sudo -E kolla_set_configs Oct 5 05:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:30:18 localhost podman[217244]: 2025-10-05 09:30:18.787178093 +0000 UTC m=+0.188578030 container start 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible) Oct 5 05:30:18 localhost podman[217244]: iscsid Oct 5 05:30:18 localhost systemd[1]: Started iscsid container. Oct 5 05:30:18 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 05:30:18 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Oct 5 05:30:18 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 05:30:18 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 05:30:18 localhost podman[217266]: 2025-10-05 09:30:18.904372464 +0000 UTC m=+0.107864299 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid) Oct 5 05:30:18 localhost podman[217266]: 2025-10-05 09:30:18.909834723 +0000 UTC m=+0.113326548 
container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:30:18 localhost podman[217266]: unhealthy Oct 5 05:30:18 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:30:18 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Failed with result 'exit-code'. 
Oct 5 05:30:19 localhost systemd[217276]: Queued start job for default target Main User Target. Oct 5 05:30:19 localhost systemd[217276]: Created slice User Application Slice. Oct 5 05:30:19 localhost systemd[217276]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 05:30:19 localhost systemd[217276]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:30:19 localhost systemd[217276]: Reached target Paths. Oct 5 05:30:19 localhost systemd[217276]: Reached target Timers. Oct 5 05:30:19 localhost systemd[217276]: Starting D-Bus User Message Bus Socket... Oct 5 05:30:19 localhost systemd[217276]: Starting Create User's Volatile Files and Directories... Oct 5 05:30:19 localhost systemd[217276]: Listening on D-Bus User Message Bus Socket. Oct 5 05:30:19 localhost systemd[217276]: Reached target Sockets. Oct 5 05:30:19 localhost systemd[217276]: Finished Create User's Volatile Files and Directories. Oct 5 05:30:19 localhost systemd[217276]: Reached target Basic System. Oct 5 05:30:19 localhost systemd[217276]: Reached target Main User Target. Oct 5 05:30:19 localhost systemd[217276]: Startup finished in 121ms. Oct 5 05:30:19 localhost systemd[1]: Started User Manager for UID 0. Oct 5 05:30:19 localhost systemd[1]: Started Session c13 of User root. Oct 5 05:30:19 localhost iscsid[217258]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:30:19 localhost iscsid[217258]: INFO:__main__:Validating config file Oct 5 05:30:19 localhost iscsid[217258]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:30:19 localhost iscsid[217258]: INFO:__main__:Writing out command to execute Oct 5 05:30:19 localhost systemd[1]: session-c13.scope: Deactivated successfully. 
Oct 5 05:30:19 localhost iscsid[217258]: ++ cat /run_command Oct 5 05:30:19 localhost iscsid[217258]: + CMD='/usr/sbin/iscsid -f' Oct 5 05:30:19 localhost iscsid[217258]: + ARGS= Oct 5 05:30:19 localhost iscsid[217258]: + sudo kolla_copy_cacerts Oct 5 05:30:19 localhost systemd[1]: Started Session c14 of User root. Oct 5 05:30:19 localhost systemd[1]: session-c14.scope: Deactivated successfully. Oct 5 05:30:19 localhost iscsid[217258]: + [[ ! -n '' ]] Oct 5 05:30:19 localhost iscsid[217258]: + . kolla_extend_start Oct 5 05:30:19 localhost iscsid[217258]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]] Oct 5 05:30:19 localhost iscsid[217258]: Running command: '/usr/sbin/iscsid -f' Oct 5 05:30:19 localhost iscsid[217258]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\''' Oct 5 05:30:19 localhost iscsid[217258]: + umask 0022 Oct 5 05:30:19 localhost iscsid[217258]: + exec /usr/sbin/iscsid -f Oct 5 05:30:19 localhost python3.9[217414]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42279 DF PROTO=TCP SPT=34952 DPT=9105 SEQ=3831920360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761B0360000000001030307) Oct 5 05:30:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:30:20.365 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:30:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:30:20.365 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:30:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:30:20.366 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:30:20 localhost python3.9[217524]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:21 localhost python3.9[217634]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:30:21 localhost network[217651]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:30:21 localhost network[217652]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:30:21 localhost network[217653]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:30:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:30:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42280 DF PROTO=TCP SPT=34952 DPT=9105 SEQ=3831920360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761BFF60000000001030307) Oct 5 05:30:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:30:24 localhost podman[217794]: 2025-10-05 09:30:24.94232162 +0000 UTC m=+0.085414665 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:30:25 localhost podman[217794]: 
2025-10-05 09:30:25.002902452 +0000 UTC m=+0.145995447 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:30:25 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:30:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31631 DF PROTO=TCP SPT=53464 DPT=9882 SEQ=1550057340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761C87C0000000001030307) Oct 5 05:30:26 localhost python3.9[217912]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:30:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:30:27 localhost podman[218023]: 2025-10-05 09:30:27.299962136 +0000 UTC m=+0.085239531 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:30:27 localhost podman[218023]: 2025-10-05 09:30:27.309509069 +0000 UTC m=+0.094786464 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Oct 5 05:30:27 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:30:27 localhost python3.9[218022]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Oct 5 05:30:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26302 DF PROTO=TCP SPT=57628 DPT=9100 SEQ=980630961 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761D0B60000000001030307) Oct 5 05:30:28 localhost python3.9[218152]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:28 localhost python3.9[218240]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656627.7206726-1259-202995924565352/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False 
force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:29 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 05:30:29 localhost systemd[217276]: Activating special unit Exit the Session... Oct 5 05:30:29 localhost systemd[217276]: Stopped target Main User Target. Oct 5 05:30:29 localhost systemd[217276]: Stopped target Basic System. Oct 5 05:30:29 localhost systemd[217276]: Stopped target Paths. Oct 5 05:30:29 localhost systemd[217276]: Stopped target Sockets. Oct 5 05:30:29 localhost systemd[217276]: Stopped target Timers. Oct 5 05:30:29 localhost systemd[217276]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:30:29 localhost systemd[217276]: Closed D-Bus User Message Bus Socket. Oct 5 05:30:29 localhost systemd[217276]: Stopped Create User's Volatile Files and Directories. Oct 5 05:30:29 localhost systemd[217276]: Removed slice User Application Slice. Oct 5 05:30:29 localhost systemd[217276]: Reached target Shutdown. Oct 5 05:30:29 localhost systemd[217276]: Finished Exit the Session. Oct 5 05:30:29 localhost systemd[217276]: Reached target Exit the Session. Oct 5 05:30:29 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 05:30:29 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 05:30:29 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 05:30:29 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 05:30:29 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 05:30:29 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 05:30:29 localhost systemd[1]: Removed slice User Slice of UID 0. 
Oct 5 05:30:29 localhost python3.9[218351]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:30 localhost python3.9[218461]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:30:30 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 5 05:30:30 localhost systemd[1]: Stopped Load Kernel Modules. Oct 5 05:30:30 localhost systemd[1]: Stopping Load Kernel Modules... Oct 5 05:30:30 localhost systemd[1]: Starting Load Kernel Modules... Oct 5 05:30:30 localhost systemd-modules-load[218465]: Module 'msr' is built in Oct 5 05:30:30 localhost systemd[1]: Finished Load Kernel Modules. 
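The recurring kernel "DROPPING:" entries are netfilter LOG-style records: space-separated KEY=VALUE fields mixed with bare flags such as DF and SYN. A small sketch, assuming that exact layout, that turns one entry into a dict (the `parse_drop_line` helper is hypothetical):

```python
def parse_drop_line(entry: str) -> dict:
    """Split a netfilter LOG-style payload into fields and bare flags.

    Everything after the 'DROPPING:' prefix is either KEY=VALUE
    (splitting on the first '=' keeps empty values like OUT=) or a
    bare token such as DF, SYN, or the trailing OPT data.
    """
    payload = entry.split("DROPPING:", 1)[1]
    fields, flags = {}, []
    for token in payload.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
        else:
            flags.append(token)
    fields["FLAGS"] = flags
    return fields

line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb "
        "SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TTL=62 "
        "PROTO=TCP SPT=34952 DPT=9105 DF SYN")
parsed = parse_drop_line(line)
print(parsed["DST"], parsed["DPT"], parsed["FLAGS"])
# → 192.168.122.108 9105 ['DF', 'SYN']
```

Grouping parsed entries by DPT shows which destination ports (9100–9105, 9882 here) are being filtered on br-ex.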
Oct 5 05:30:31 localhost python3.9[218575]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30408 DF PROTO=TCP SPT=32838 DPT=9102 SEQ=2889258652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761DFB60000000001030307) Oct 5 05:30:32 localhost python3.9[218685]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:33 localhost python3.9[218795]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:33 localhost python3.9[218905]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32916 DF PROTO=TCP SPT=45538 DPT=9101 SEQ=3536841783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761E7C50000000001030307) Oct 5 05:30:34 localhost python3.9[219031]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656633.3205926-1433-156149909705143/.source.conf _original_basename=multipath.conf follow=False 
checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:34 localhost podman[219119]: 2025-10-05 09:30:34.742905104 +0000 UTC m=+0.091565732 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, architecture=x86_64, distribution-scope=public, release=553, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:30:34 localhost podman[219119]: 2025-10-05 09:30:34.848248624 +0000 UTC m=+0.196909282 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:30:35 localhost python3.9[219296]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:30:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32918 DF PROTO=TCP SPT=45538 DPT=9101 SEQ=3536841783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC761F3B70000000001030307) Oct 5 05:30:37 localhost python3.9[219475]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:38 localhost 
python3.9[219585]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:38 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Oct 5 05:30:38 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:30:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:30:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:30:39 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Oct 5 05:30:39 localhost python3.9[219696]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:40 localhost python3.9[219807]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:40 localhost systemd[1]: virtproxyd.service: Deactivated successfully. 
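The ansible.builtin.lineinfile invocations above and below ensure settings such as `find_multipaths yes` appear after the `defaults` section header in /etc/multipath.conf (regexp to detect an existing line, insertafter to place a new one). A rough pure-Python sketch of that ensure-line semantics — the `ensure_line_after` function and its simplified behavior are assumptions; the real module handles far more cases:

```python
import re

def ensure_line_after(text: str, anchor_re: str, line_re: str, line: str) -> str:
    """Mimic a slice of lineinfile: replace the first line matching
    line_re, otherwise insert `line` after the first anchor_re match."""
    lines = text.splitlines()
    pattern = re.compile(line_re)
    for i, existing in enumerate(lines):
        if pattern.match(existing):
            lines[i] = line            # setting already present: normalize it
            return "\n".join(lines) + "\n"
    anchor = re.compile(anchor_re)
    for i, existing in enumerate(lines):
        if anchor.match(existing):
            lines.insert(i + 1, line)  # insertafter=^defaults semantics
            break
    return "\n".join(lines) + "\n"

conf = "defaults {\n}\nblacklist {\n}\n"
conf = ensure_line_after(conf, r"^defaults", r"^\s+find_multipaths",
                         "    find_multipaths yes")
print(conf)
```

Running the function twice with the same arguments is a no-op, which matches the idempotency the playbook relies on across repeated runs.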
Oct 5 05:30:40 localhost python3.9[219917]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32919 DF PROTO=TCP SPT=45538 DPT=9101 SEQ=3536841783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76203760000000001030307) Oct 5 05:30:41 localhost python3.9[220028]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:42 localhost python3.9[220138]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:43 localhost python3.9[220248]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:30:43 localhost python3.9[220360]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:44 localhost python3.9[220470]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:45 localhost python3.9[220580]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:45 localhost python3.9[220637]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:46 localhost python3.9[220747]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30484 DF PROTO=TCP SPT=49884 DPT=9105 
SEQ=844746988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762196A0000000001030307) Oct 5 05:30:46 localhost python3.9[220804]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:47 localhost python3.9[220914]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30485 DF PROTO=TCP SPT=49884 DPT=9105 SEQ=844746988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7621D770000000001030307) Oct 5 05:30:48 localhost python3.9[221024]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:30:49 localhost podman[221082]: 2025-10-05 09:30:49.353176784 +0000 UTC m=+0.084249595 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.build-date=20251001) Oct 5 05:30:49 localhost podman[221082]: 2025-10-05 09:30:49.366145945 +0000 UTC m=+0.097218776 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:30:49 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
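Each podman healthcheck record above carries a UTC timestamp, a monotonic offset (`m=+…`), an event name (`health_status` or `exec_died`), the 64-hex-character container id, and a parenthesized label dump. A sketch, assuming that shape, for extracting the essentials (`parse_podman_event` is an illustrative helper, not a podman API):

```python
import re

EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} [\d:.]+) \+0000 UTC m=\+[\d.]+ "
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
)

def parse_podman_event(line: str):
    """Return (timestamp, event, container_id, health) or None."""
    m = EVENT_RE.search(line)
    if not m:
        return None
    health = re.search(r"health_status=(\w+)", line)
    return (m.group("ts"), m.group("event"), m.group("cid"),
            health.group(1) if health else None)

sample = ("2025-10-05 09:30:49.353176784 +0000 UTC m=+0.084249595 "
          "container health_status 289ba0dc454d5fd830c1ac301f0489ed"
          "50b12ab503386ae52e6a6eb7b1afb7d6 (image=..., name=iscsid, "
          "health_status=starting, ...)")
print(parse_podman_event(sample))
```

Filtering for events whose health field is not `healthy` (like the `starting` status on the iscsid container here) is a quick triage pass over a node's journal.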
Oct 5 05:30:49 localhost python3.9[221081]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30486 DF PROTO=TCP SPT=49884 DPT=9105 SEQ=844746988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76225760000000001030307) Oct 5 05:30:50 localhost python3.9[221208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:51 localhost python3.9[221265]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:52 localhost systemd[1]: virtqemud.service: Deactivated successfully. Oct 5 05:30:52 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Oct 5 05:30:53 localhost python3.9[221377]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:30:53 localhost systemd[1]: Reloading. Oct 5 05:30:53 localhost systemd-rc-local-generator[221399]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:30:53 localhost systemd-sysv-generator[221403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:30:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:30:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30487 DF PROTO=TCP SPT=49884 DPT=9105 SEQ=844746988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76235360000000001030307) Oct 5 05:30:54 localhost python3.9[221525]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:54 localhost python3.9[221582]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 
05:30:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:30:55 localhost podman[221692]: 2025-10-05 09:30:55.259018579 +0000 UTC m=+0.092697420 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller) Oct 5 05:30:55 localhost podman[221692]: 2025-10-05 09:30:55.296182886 +0000 UTC m=+0.129861737 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:30:55 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:30:55 localhost python3.9[221693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:55 localhost python3.9[221774]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:30:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63126 DF PROTO=TCP SPT=56310 DPT=9882 SEQ=1918488410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7623DAC0000000001030307) Oct 5 05:30:56 localhost python3.9[221884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:30:56 localhost systemd[1]: Reloading. Oct 5 05:30:56 localhost systemd-sysv-generator[221912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:30:56 localhost systemd-rc-local-generator[221907]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:30:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 5 05:30:57 localhost systemd[1]: Starting Create netns directory... Oct 5 05:30:57 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:30:57 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:30:57 localhost systemd[1]: Finished Create netns directory. Oct 5 05:30:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:30:57 localhost podman[222037]: 2025-10-05 09:30:57.903150507 +0000 UTC m=+0.086414100 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:30:57 localhost podman[222037]: 2025-10-05 09:30:57.936171928 +0000 UTC m=+0.119435511 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:30:57 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:30:58 localhost python3.9[222036]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:30:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55751 DF PROTO=TCP SPT=50498 DPT=9100 SEQ=3409932152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76245B70000000001030307) Oct 5 05:30:59 localhost python3.9[222164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:30:59 localhost python3.9[222252]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656658.2262874-2054-64571703206659/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:31:01 localhost python3.9[222362]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:31:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18749 DF PROTO=TCP SPT=33402 DPT=9102 SEQ=2381695005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76254F70000000001030307) Oct 5 05:31:02 localhost python3.9[222472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:31:02 localhost python3.9[222560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656661.7546165-2128-256445831709480/.source.json _original_basename=.1dgnqy7l follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:03 localhost python3.9[222670]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18750 DF PROTO=TCP SPT=33402 DPT=9102 SEQ=2381695005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7625CF60000000001030307) Oct 5 05:31:05 localhost python3.9[222978]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Oct 5 05:31:06 localhost python3.9[223088]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38173 DF PROTO=TCP SPT=59308 DPT=9101 SEQ=1452087204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76268F60000000001030307) Oct 5 05:31:07 localhost python3.9[223198]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:31:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38174 DF PROTO=TCP SPT=59308 DPT=9101 SEQ=1452087204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76278B60000000001030307) Oct 5 05:31:12 localhost python3[223334]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:31:13 localhost podman[223347]: 
2025-10-05 09:31:12.229341545 +0000 UTC m=+0.043779674 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Oct 5 05:31:13 localhost podman[223396]: Oct 5 05:31:13 localhost podman[223396]: 2025-10-05 09:31:13.945147397 +0000 UTC m=+0.060220465 container create 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=multipathd) Oct 5 
05:31:13 localhost podman[223396]: 2025-10-05 09:31:13.911944191 +0000 UTC m=+0.027017279 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Oct 5 05:31:13 localhost python3[223334]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Oct 5 05:31:14 localhost python3.9[223545]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:31:15 localhost python3.9[223657]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:16 localhost python3.9[223712]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:31:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62497 DF PROTO=TCP SPT=52022 DPT=9105 SEQ=3547452318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7628E9B0000000001030307) Oct 5 05:31:16 localhost 
python3.9[223821]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656676.282222-2392-251115487928019/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:17 localhost python3.9[223876]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:31:17 localhost systemd[1]: Reloading. Oct 5 05:31:17 localhost systemd-sysv-generator[223903]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:31:17 localhost systemd-rc-local-generator[223898]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:31:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62498 DF PROTO=TCP SPT=52022 DPT=9105 SEQ=3547452318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76292B70000000001030307) Oct 5 05:31:18 localhost python3.9[223967]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:18 localhost systemd[1]: Reloading. 
Oct 5 05:31:18 localhost systemd-rc-local-generator[223991]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:31:18 localhost systemd-sysv-generator[223994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:31:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:18 localhost systemd[1]: Starting multipathd container... Oct 5 05:31:18 localhost systemd[1]: Started libcrun container. Oct 5 05:31:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745d177b6ec964267276a1dd5886744a31f62e38d9152b0018ab01837772c364/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:31:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745d177b6ec964267276a1dd5886744a31f62e38d9152b0018ab01837772c364/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:31:18 localhost podman[224008]: 2025-10-05 09:31:18.949114156 +0000 UTC m=+0.145506065 container init 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2) Oct 5 05:31:18 localhost multipathd[224021]: + sudo -E kolla_set_configs Oct 5 05:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:31:18 localhost podman[224008]: 2025-10-05 09:31:18.984924257 +0000 UTC m=+0.181316126 container start 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:31:18 localhost podman[224008]: multipathd Oct 5 05:31:18 localhost systemd[1]: Started multipathd container. 
Oct 5 05:31:19 localhost multipathd[224021]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:31:19 localhost multipathd[224021]: INFO:__main__:Validating config file Oct 5 05:31:19 localhost multipathd[224021]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:31:19 localhost multipathd[224021]: INFO:__main__:Writing out command to execute Oct 5 05:31:19 localhost multipathd[224021]: ++ cat /run_command Oct 5 05:31:19 localhost multipathd[224021]: + CMD='/usr/sbin/multipathd -d' Oct 5 05:31:19 localhost multipathd[224021]: + ARGS= Oct 5 05:31:19 localhost multipathd[224021]: + sudo kolla_copy_cacerts Oct 5 05:31:19 localhost multipathd[224021]: + [[ ! -n '' ]] Oct 5 05:31:19 localhost multipathd[224021]: + . kolla_extend_start Oct 5 05:31:19 localhost multipathd[224021]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Oct 5 05:31:19 localhost multipathd[224021]: Running command: '/usr/sbin/multipathd -d' Oct 5 05:31:19 localhost multipathd[224021]: + umask 0022 Oct 5 05:31:19 localhost multipathd[224021]: + exec /usr/sbin/multipathd -d Oct 5 05:31:19 localhost multipathd[224021]: 10155.213793 | --------start up-------- Oct 5 05:31:19 localhost multipathd[224021]: 10155.213846 | read /etc/multipath.conf Oct 5 05:31:19 localhost multipathd[224021]: 10155.218178 | path checkers start up Oct 5 05:31:19 localhost podman[224029]: 2025-10-05 09:31:19.082217943 +0000 UTC m=+0.091562231 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base 
Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:31:19 localhost podman[224029]: 2025-10-05 09:31:19.09626336 +0000 UTC m=+0.105607648 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:31:19 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:31:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:31:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62499 DF PROTO=TCP SPT=52022 DPT=9105 SEQ=3547452318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7629AB70000000001030307) Oct 5 05:31:19 localhost podman[224168]: 2025-10-05 09:31:19.909486869 +0000 UTC m=+0.078651303 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2) Oct 5 05:31:19 localhost python3.9[224167]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:31:19 localhost podman[224168]: 2025-10-05 09:31:19.942005136 +0000 UTC m=+0.111169580 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, container_name=iscsid) Oct 5 05:31:19 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:31:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:31:20.365 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:31:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:31:20.366 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:31:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:31:20.366 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:31:20 localhost python3.9[224298]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:31:22 localhost python3.9[224421]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:31:22 localhost systemd[1]: Stopping multipathd container... 
Oct 5 05:31:22 localhost multipathd[224021]: 10158.374566 | exit (signal) Oct 5 05:31:22 localhost multipathd[224021]: 10158.374647 | --------shut down------- Oct 5 05:31:22 localhost systemd[1]: libpod-508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.scope: Deactivated successfully. Oct 5 05:31:22 localhost podman[224425]: 2025-10-05 09:31:22.266461859 +0000 UTC m=+0.085360143 container died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:31:22 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.timer: Deactivated successfully. Oct 5 05:31:22 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:31:22 localhost systemd[1]: tmp-crun.r7gPAq.mount: Deactivated successfully. Oct 5 05:31:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f-userdata-shm.mount: Deactivated successfully. Oct 5 05:31:22 localhost podman[224425]: 2025-10-05 09:31:22.487244488 +0000 UTC m=+0.306142732 container cleanup 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:31:22 localhost podman[224425]: multipathd Oct 5 05:31:22 localhost podman[224453]: 2025-10-05 09:31:22.576160291 +0000 UTC m=+0.055758660 container cleanup 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, managed_by=edpm_ansible) Oct 5 05:31:22 localhost podman[224453]: multipathd Oct 5 05:31:22 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Oct 5 05:31:22 localhost systemd[1]: Stopped multipathd container. Oct 5 05:31:22 localhost systemd[1]: Starting multipathd container... Oct 5 05:31:22 localhost systemd[1]: Started libcrun container. Oct 5 05:31:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745d177b6ec964267276a1dd5886744a31f62e38d9152b0018ab01837772c364/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:31:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745d177b6ec964267276a1dd5886744a31f62e38d9152b0018ab01837772c364/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:31:22 localhost podman[224465]: 2025-10-05 09:31:22.725708847 +0000 UTC m=+0.118675791 container init 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2) Oct 5 05:31:22 localhost multipathd[224479]: + sudo -E kolla_set_configs Oct 5 05:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:31:22 localhost podman[224465]: 2025-10-05 09:31:22.761014286 +0000 UTC m=+0.153981170 container start 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:31:22 localhost podman[224465]: multipathd Oct 5 05:31:22 localhost systemd[1]: Started multipathd container. 
Oct 5 05:31:22 localhost multipathd[224479]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:31:22 localhost multipathd[224479]: INFO:__main__:Validating config file Oct 5 05:31:22 localhost multipathd[224479]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:31:22 localhost multipathd[224479]: INFO:__main__:Writing out command to execute Oct 5 05:31:22 localhost multipathd[224479]: ++ cat /run_command Oct 5 05:31:22 localhost multipathd[224479]: + CMD='/usr/sbin/multipathd -d' Oct 5 05:31:22 localhost multipathd[224479]: + ARGS= Oct 5 05:31:22 localhost multipathd[224479]: + sudo kolla_copy_cacerts Oct 5 05:31:22 localhost podman[224488]: 2025-10-05 09:31:22.825056225 +0000 UTC m=+0.063113477 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:31:22 localhost multipathd[224479]: + [[ ! -n '' ]] Oct 5 05:31:22 localhost multipathd[224479]: + . kolla_extend_start Oct 5 05:31:22 localhost multipathd[224479]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Oct 5 05:31:22 localhost multipathd[224479]: Running command: '/usr/sbin/multipathd -d' Oct 5 05:31:22 localhost multipathd[224479]: + umask 0022 Oct 5 05:31:22 localhost multipathd[224479]: + exec /usr/sbin/multipathd -d Oct 5 05:31:22 localhost multipathd[224479]: 10158.990248 | --------start up-------- Oct 5 05:31:22 localhost multipathd[224479]: 10158.990263 | read /etc/multipath.conf Oct 5 05:31:22 localhost multipathd[224479]: 10158.992517 | path checkers start up Oct 5 05:31:22 localhost podman[224488]: 2025-10-05 09:31:22.861050272 +0000 UTC m=+0.099107494 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 05:31:22 localhost podman[224488]: unhealthy Oct 5 05:31:22 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:31:22 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Failed with result 'exit-code'. 
Oct 5 05:31:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62500 DF PROTO=TCP SPT=52022 DPT=9105 SEQ=3547452318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762AA770000000001030307) Oct 5 05:31:24 localhost python3.9[224627]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:25 localhost python3.9[224737]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:31:25 localhost podman[224809]: 2025-10-05 09:31:25.916893159 +0000 UTC m=+0.084389039 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:31:25 localhost podman[224809]: 2025-10-05 09:31:25.961147426 +0000 UTC m=+0.128643306 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:31:25 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:31:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2787 DF PROTO=TCP SPT=49534 DPT=9882 SEQ=1602772664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762B2DD0000000001030307) Oct 5 05:31:26 localhost python3.9[224869]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Oct 5 05:31:26 localhost python3.9[224989]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:31:27 localhost python3.9[225077]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656686.4181335-2633-243486133915425/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22333 DF PROTO=TCP SPT=38496 DPT=9100 SEQ=1363739768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762BAF60000000001030307) Oct 5 05:31:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:31:28 localhost podman[225188]: 2025-10-05 09:31:28.200667096 +0000 UTC m=+0.080393628 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:31:28 localhost podman[225188]: 2025-10-05 09:31:28.206329786 +0000 UTC 
m=+0.086056278 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:31:28 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:31:28 localhost python3.9[225187]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:29 localhost python3.9[225315]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:31:29 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 5 05:31:29 localhost systemd[1]: Stopped Load Kernel Modules. Oct 5 05:31:29 localhost systemd[1]: Stopping Load Kernel Modules... Oct 5 05:31:29 localhost systemd[1]: Starting Load Kernel Modules... Oct 5 05:31:29 localhost systemd-modules-load[225319]: Module 'msr' is built in Oct 5 05:31:29 localhost systemd[1]: Finished Load Kernel Modules. 
Oct 5 05:31:30 localhost python3.9[225429]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:31:31 localhost python3.9[225492]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:31:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6006 DF PROTO=TCP SPT=50708 DPT=9102 SEQ=1094439225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762CA360000000001030307) Oct 5 05:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26880 DF PROTO=TCP SPT=41502 DPT=9101 SEQ=723562009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762D2220000000001030307) Oct 5 05:31:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26882 DF PROTO=TCP SPT=41502 DPT=9101 SEQ=723562009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762DE370000000001030307) Oct 5 05:31:38 localhost systemd[1]: Reloading. Oct 5 05:31:38 localhost systemd-rc-local-generator[225610]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:31:38 localhost systemd-sysv-generator[225615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:31:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:39 localhost systemd[1]: Reloading. Oct 5 05:31:39 localhost systemd-sysv-generator[225650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:31:39 localhost systemd-rc-local-generator[225645]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:31:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:39 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button) Oct 5 05:31:39 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 5 05:31:39 localhost lvm[225699]: PV /dev/loop4 online, VG ceph_vg1 is complete. Oct 5 05:31:39 localhost lvm[225699]: VG ceph_vg1 finished Oct 5 05:31:39 localhost lvm[225698]: PV /dev/loop3 online, VG ceph_vg0 is complete. Oct 5 05:31:39 localhost lvm[225698]: VG ceph_vg0 finished Oct 5 05:31:39 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Oct 5 05:31:39 localhost systemd[1]: Starting man-db-cache-update.service... Oct 5 05:31:39 localhost systemd[1]: Reloading. Oct 5 05:31:39 localhost systemd-sysv-generator[225752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:31:39 localhost systemd-rc-local-generator[225746]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:31:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:40 localhost systemd[1]: Queuing reload/restart jobs for marked units… Oct 5 05:31:40 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Oct 5 05:31:40 localhost systemd[1]: Finished man-db-cache-update.service. Oct 5 05:31:40 localhost systemd[1]: man-db-cache-update.service: Consumed 1.080s CPU time. Oct 5 05:31:40 localhost systemd[1]: run-rbbcab2114bc4419ab53931c9b06ca770.service: Deactivated successfully. Oct 5 05:31:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26883 DF PROTO=TCP SPT=41502 DPT=9101 SEQ=723562009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC762EDF60000000001030307) Oct 5 05:31:41 localhost python3.9[226995]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:42 localhost python3.9[227103]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:31:43 localhost python3.9[227217]: ansible-ansible.builtin.file Invoked with mode=0644 
path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:31:45 localhost python3.9[227327]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:31:45 localhost systemd[1]: Reloading. Oct 5 05:31:45 localhost systemd-sysv-generator[227357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:31:45 localhost systemd-rc-local-generator[227352]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:31:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:46 localhost python3.9[227471]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:31:46 localhost network[227488]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:31:46 localhost network[227489]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:31:46 localhost network[227490]: It is advised to switch to 'NetworkManager' instead for network management. 
Oct 5 05:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33309 DF PROTO=TCP SPT=44028 DPT=9105 SEQ=888038735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76303CA0000000001030307) Oct 5 05:31:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33310 DF PROTO=TCP SPT=44028 DPT=9105 SEQ=888038735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76307B60000000001030307) Oct 5 05:31:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33311 DF PROTO=TCP SPT=44028 DPT=9105 SEQ=888038735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7630FB60000000001030307) Oct 5 05:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:31:50 localhost systemd[1]: tmp-crun.c8OxCE.mount: Deactivated successfully. 
Oct 5 05:31:50 localhost podman[227634]: 2025-10-05 09:31:50.890970844 +0000 UTC m=+0.063697347 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:31:50 localhost podman[227634]: 2025-10-05 09:31:50.900088645 +0000 UTC m=+0.072815148 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3) Oct 5 05:31:50 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:31:51 localhost python3.9[227745]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:52 localhost python3.9[227856]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:52 localhost python3.9[227967]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:31:53 localhost podman[227969]: 2025-10-05 09:31:53.074305366 +0000 UTC m=+0.083352457 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 05:31:53 localhost podman[227969]: 2025-10-05 09:31:53.116205895 +0000 UTC m=+0.125253026 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, config_id=multipathd) Oct 5 05:31:53 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:31:53 localhost python3.9[228097]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33312 DF PROTO=TCP SPT=44028 DPT=9105 SEQ=888038735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7631F770000000001030307) Oct 5 05:31:54 localhost python3.9[228208]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:55 localhost python3.9[228319]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:55 localhost python3.9[228430]: ansible-ansible.builtin.systemd_service Invoked with enabled=False 
name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25660 DF PROTO=TCP SPT=60994 DPT=9882 SEQ=2264347691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763280C0000000001030307) Oct 5 05:31:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:31:56 localhost podman[228542]: 2025-10-05 09:31:56.327811186 +0000 UTC m=+0.063388459 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:31:56 localhost podman[228542]: 2025-10-05 09:31:56.363722657 +0000 UTC m=+0.099299960 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 05:31:56 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:31:56 localhost python3.9[228541]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:31:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31191 DF PROTO=TCP SPT=42654 DPT=9100 SEQ=4097858310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76330370000000001030307) Oct 5 05:31:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:31:58 localhost podman[228583]: 2025-10-05 09:31:58.906320778 +0000 UTC m=+0.074788520 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:31:58 localhost podman[228583]: 2025-10-05 09:31:58.916159379 +0000 UTC m=+0.084627121 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:31:58 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:32:00 localhost python3.9[228693]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:00 localhost python3.9[228803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:01 localhost python3.9[228913]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:01 localhost python3.9[229023]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62883 DF PROTO=TCP SPT=55560 DPT=9102 SEQ=4232054541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7633F360000000001030307) Oct 5 05:32:02 localhost python3.9[229133]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:02 localhost python3.9[229243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:03 localhost python3.9[229353]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33856 DF PROTO=TCP SPT=46102 DPT=9101 SEQ=573394140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76347530000000001030307) Oct 5 05:32:04 localhost python3.9[229463]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:05 localhost python3.9[229573]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:05 localhost python3.9[229683]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:06 localhost python3.9[229793]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:06 localhost python3.9[229903]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33858 DF PROTO=TCP SPT=46102 DPT=9101 SEQ=573394140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76353760000000001030307) Oct 5 05:32:07 localhost python3.9[230013]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:07 localhost python3.9[230123]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:08 localhost python3.9[230233]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:09 localhost python3.9[230343]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:10 localhost python3.9[230453]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33859 DF 
PROTO=TCP SPT=46102 DPT=9101 SEQ=573394140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76363360000000001030307) Oct 5 05:32:12 localhost python3.9[230563]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:32:13 localhost python3.9[230673]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:32:13 localhost systemd[1]: Reloading. Oct 5 05:32:13 localhost systemd-sysv-generator[230699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:32:13 localhost systemd-rc-local-generator[230695]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:32:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:32:14 localhost python3.9[230819]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:14 localhost python3.9[230930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:16 localhost python3.9[231041]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30720 DF PROTO=TCP SPT=48556 DPT=9105 SEQ=2383811369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76378FA0000000001030307) Oct 5 05:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30721 DF PROTO=TCP SPT=48556 DPT=9105 SEQ=2383811369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7637CF70000000001030307) Oct 5 05:32:18 localhost python3.9[231152]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None 
stdin=None Oct 5 05:32:18 localhost python3.9[231263]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:19 localhost python3.9[231374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30722 DF PROTO=TCP SPT=48556 DPT=9105 SEQ=2383811369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76384F70000000001030307) Oct 5 05:32:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:32:20.366 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:32:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:32:20.367 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:32:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:32:20.368 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:32:21 localhost python3.9[231485]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:32:21 localhost podman[231487]: 2025-10-05 09:32:21.157669297 +0000 UTC m=+0.083477111 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2) Oct 5 05:32:21 localhost podman[231487]: 2025-10-05 09:32:21.173032384 +0000 UTC m=+0.098840158 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:32:21 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:32:21 localhost python3.9[231615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:32:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:32:23 localhost podman[231727]: 2025-10-05 09:32:23.687968334 +0000 UTC m=+0.074672238 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd) Oct 5 05:32:23 localhost podman[231727]: 2025-10-05 09:32:23.702234722 +0000 UTC m=+0.088938576 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:32:23 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:32:23 localhost python3.9[231726]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30723 DF PROTO=TCP SPT=48556 DPT=9105 SEQ=2383811369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76394B60000000001030307) Oct 5 05:32:24 localhost python3.9[231855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:25 localhost python3.9[231965]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:25 localhost python3.9[232075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9173 DF PROTO=TCP SPT=53250 DPT=9100 SEQ=3434901398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7639D360000000001030307) Oct 5 05:32:26 localhost python3.9[232185]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:32:26 localhost podman[232296]: 2025-10-05 09:32:26.833201157 +0000 UTC m=+0.078004246 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:32:26 localhost podman[232296]: 2025-10-05 09:32:26.899162214 +0000 UTC m=+0.143965303 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, 
org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:32:26 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:32:26 localhost python3.9[232295]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:27 localhost python3.9[232429]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9174 DF PROTO=TCP SPT=53250 DPT=9100 SEQ=3434901398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763A5360000000001030307) Oct 5 05:32:28 localhost python3.9[232539]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:28 localhost python3.9[232649]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:32:29 localhost podman[232760]: 2025-10-05 09:32:29.315165504 +0000 UTC m=+0.080914373 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:32:29 localhost podman[232760]: 2025-10-05 09:32:29.320357962 +0000 UTC m=+0.086106801 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:32:29 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:32:29 localhost python3.9[232759]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:30 localhost python3.9[232888]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:31 localhost python3.9[232998]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30191 DF PROTO=TCP SPT=50724 DPT=9102 SEQ=3144123560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763B4760000000001030307) Oct 5 05:32:34 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9867 DF PROTO=TCP SPT=53866 DPT=9101 SEQ=2638407367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763BC830000000001030307) Oct 5 05:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9869 DF PROTO=TCP SPT=53866 DPT=9101 SEQ=2638407367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763C8760000000001030307) Oct 5 05:32:38 localhost python3.9[233144]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Oct 5 05:32:39 localhost python3.9[233288]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 5 05:32:40 localhost python3.9[233422]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471152.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 5 05:32:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9870 
DF PROTO=TCP SPT=53866 DPT=9101 SEQ=2638407367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763D8370000000001030307) Oct 5 05:32:41 localhost sshd[233448]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:32:41 localhost systemd-logind[760]: New session 57 of user zuul. Oct 5 05:32:41 localhost systemd[1]: Started Session 57 of User zuul. Oct 5 05:32:41 localhost systemd[1]: session-57.scope: Deactivated successfully. Oct 5 05:32:41 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit. Oct 5 05:32:41 localhost systemd-logind[760]: Removed session 57. Oct 5 05:32:42 localhost python3.9[233559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:43 localhost python3.9[233645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656762.5323231-4270-187446806389945/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:44 localhost python3.9[233753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:45 localhost python3.9[233808]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:45 localhost python3.9[233916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:46 localhost python3.9[234002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656765.4072862-4270-51621923962747/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62458 DF PROTO=TCP SPT=45994 DPT=9105 SEQ=1745645564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763EE2A0000000001030307) Oct 5 05:32:46 localhost python3.9[234110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:47 localhost python3.9[234196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656766.4773588-4270-107229162296445/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 
checksum=be143462936c4f6b37574d8a4ad49679def80d15 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62459 DF PROTO=TCP SPT=45994 DPT=9105 SEQ=1745645564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763F2360000000001030307) Oct 5 05:32:47 localhost python3.9[234304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:48 localhost python3.9[234390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656767.5790288-4270-9422232601256/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:49 localhost python3.9[234500]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62460 DF PROTO=TCP SPT=45994 DPT=9105 SEQ=1745645564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC763FA360000000001030307) Oct 5 05:32:50 localhost python3.9[234610]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:50 localhost python3.9[234720]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:32:51 localhost podman[234833]: 2025-10-05 09:32:51.497202465 +0000 UTC m=+0.075871494 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, managed_by=edpm_ansible) Oct 5 05:32:51 localhost podman[234833]: 2025-10-05 09:32:51.507213907 +0000 UTC m=+0.085882916 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true) Oct 5 05:32:51 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:32:51 localhost python3.9[234832]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:32:52 localhost python3.9[234960]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:32:53 localhost python3.9[235070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:32:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62461 DF PROTO=TCP SPT=45994 DPT=9105 SEQ=1745645564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76409F60000000001030307) Oct 5 05:32:53 localhost podman[235071]: 2025-10-05 09:32:53.917276355 +0000 UTC m=+0.086881432 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:32:53 localhost podman[235071]: 2025-10-05 09:32:53.957138017 +0000 UTC m=+0.126743074 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
container_name=multipathd) Oct 5 05:32:53 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:32:54 localhost python3.9[235175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656773.3571239-4603-17465345363609/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:55 localhost python3.9[235283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:32:55 localhost python3.9[235369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656774.959123-4649-226140555576896/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:32:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21582 DF PROTO=TCP SPT=44024 DPT=9882 SEQ=632359101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764126C0000000001030307) Oct 5 05:32:56 localhost python3.9[235479]: 
ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Oct 5 05:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:32:57 localhost podman[235590]: 2025-10-05 09:32:57.438933704 +0000 UTC m=+0.066806966 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:32:57 localhost podman[235590]: 2025-10-05 09:32:57.475925242 +0000 UTC m=+0.103798474 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:32:57 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:32:57 localhost python3.9[235589]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:32:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33089 DF PROTO=TCP SPT=46142 DPT=9100 SEQ=2859840203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7641A760000000001030307) Oct 5 05:32:58 localhost python3[235725]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:32:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:32:59 localhost podman[235752]: 2025-10-05 09:32:59.934796284 +0000 UTC m=+0.106480024 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:32:59 localhost podman[235752]: 2025-10-05 09:32:59.973594338 +0000 UTC m=+0.145278068 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Oct 5 05:32:59 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:33:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18731 DF PROTO=TCP SPT=52926 DPT=9102 SEQ=3163855300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76429B70000000001030307) Oct 5 05:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23524 DF PROTO=TCP SPT=57676 DPT=9101 SEQ=3960333717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76431B30000000001030307) Oct 5 05:33:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23526 DF PROTO=TCP SPT=57676 DPT=9101 SEQ=3960333717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7643DB60000000001030307) Oct 5 05:33:09 localhost podman[235739]: 2025-10-05 09:32:58.699180191 +0000 UTC m=+0.043040656 image pull 
quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 5 05:33:09 localhost podman[235818]: Oct 5 05:33:09 localhost podman[235818]: 2025-10-05 09:33:09.414075727 +0000 UTC m=+0.089904911 container create 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:33:09 localhost podman[235818]: 2025-10-05 09:33:09.371305539 +0000 UTC m=+0.047134763 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 5 05:33:09 localhost python3[235725]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible 
--label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Oct 5 05:33:10 localhost python3.9[235965]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23527 DF PROTO=TCP SPT=57676 DPT=9101 SEQ=3960333717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7644D770000000001030307) Oct 5 05:33:11 localhost python3.9[236077]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Oct 5 05:33:12 localhost python3.9[236187]: 
ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:33:13 localhost python3[236297]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:33:13 localhost python3[236297]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0d460c957a79c0fa941447cb00e5ab934f0ccc1442862d4e417ff427bd26aed9",#012 "Digest": "sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:32:21.432647731Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207527293,#012 "VirtualSize": 1207527293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:6a39f36d67f67acbd99daa43f5f54c2ceabda19dd25b824285c9338b74a7494e",#012 "sha256:9a26e1dd0ae990be1ae7a87aaaac389265f77f7100ea3ac633d95d89956449a4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 5 05:33:13 localhost podman[236349]: 2025-10-05 09:33:13.642715353 +0000 UTC m=+0.074793936 container remove 700a17e7173921c98761fca5b501161649a30de2f6f7ac44e1c976e012e910ef (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=nova_compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '4f35ee3aff3ccdd22a731d50021565d5-5d5b173631792e25c080b07e9b3e041b'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public) Oct 5 05:33:13 localhost python3[236297]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Oct 5 05:33:13 localhost podman[236363]: Oct 5 05:33:13 localhost podman[236363]: 2025-10-05 09:33:13.741987677 +0000 UTC m=+0.082271531 container create c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:33:13 localhost podman[236363]: 2025-10-05 09:33:13.70154283 +0000 UTC m=+0.041826714 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Oct 5 05:33:13 localhost python3[236297]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Oct 5 05:33:15 localhost python3.9[236509]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:16 localhost python3.9[236621]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38593 DF PROTO=TCP SPT=35986 DPT=9105 SEQ=2505543731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764635A0000000001030307) Oct 5 05:33:17 localhost python3.9[236730]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656796.1640697-4923-231610922016795/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38594 DF PROTO=TCP SPT=35986 DPT=9105 SEQ=2505543731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76467760000000001030307) Oct 5 05:33:18 localhost python3.9[236785]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:33:18 localhost systemd[1]: Reloading. Oct 5 05:33:18 localhost systemd-sysv-generator[236815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:33:18 localhost systemd-rc-local-generator[236809]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:33:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:33:19 localhost python3.9[236875]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:33:19 localhost systemd[1]: Reloading. Oct 5 05:33:19 localhost systemd-sysv-generator[236902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:33:19 localhost systemd-rc-local-generator[236899]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:33:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:33:19 localhost systemd[1]: Starting nova_compute container... Oct 5 05:33:19 localhost systemd[1]: Started libcrun container. 
Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:33:19 localhost podman[236916]: 2025-10-05 09:33:19.561028208 +0000 UTC m=+0.128224151 container init c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm) Oct 5 05:33:19 localhost podman[236916]: 2025-10-05 09:33:19.569666624 +0000 UTC m=+0.136862527 container start c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:33:19 localhost podman[236916]: nova_compute Oct 5 05:33:19 localhost nova_compute[236931]: + sudo -E kolla_set_configs Oct 5 05:33:19 localhost systemd[1]: Started nova_compute container. Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Validating config file Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying service configuration files Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Deleting /etc/nova/nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/nova/nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying 
/var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Deleting /etc/ceph Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Creating directory /etc/ceph Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/ceph Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Deleting 
/var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Writing out command to execute Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Oct 5 05:33:19 localhost nova_compute[236931]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Oct 5 05:33:19 localhost nova_compute[236931]: ++ cat /run_command Oct 5 05:33:19 localhost nova_compute[236931]: + CMD=nova-compute Oct 5 05:33:19 localhost nova_compute[236931]: + ARGS= Oct 5 05:33:19 localhost nova_compute[236931]: + sudo kolla_copy_cacerts Oct 5 05:33:19 localhost nova_compute[236931]: + [[ ! -n '' ]] Oct 5 05:33:19 localhost nova_compute[236931]: + . 
kolla_extend_start Oct 5 05:33:19 localhost nova_compute[236931]: Running command: 'nova-compute' Oct 5 05:33:19 localhost nova_compute[236931]: + echo 'Running command: '\''nova-compute'\''' Oct 5 05:33:19 localhost nova_compute[236931]: + umask 0022 Oct 5 05:33:19 localhost nova_compute[236931]: + exec nova-compute Oct 5 05:33:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38595 DF PROTO=TCP SPT=35986 DPT=9105 SEQ=2505543731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7646F770000000001030307) Oct 5 05:33:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:33:20.367 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:33:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:33:20.368 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:33:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:33:20.368 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:33:20 localhost python3.9[237051]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.351 2 DEBUG os_vif [-] Loaded VIF 
plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.351 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.352 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.352 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.462 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.483 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:33:21 localhost python3.9[237163]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:33:21 localhost podman[237181]: 2025-10-05 09:33:21.901268442 +0000 UTC m=+0.071994813 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:33:21 localhost podman[237181]: 2025-10-05 09:33:21.912940336 +0000 UTC m=+0.083666707 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:33:21 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:33:21 localhost nova_compute[236931]: 2025-10-05 09:33:21.938 2 INFO nova.virt.driver [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.048 2 INFO nova.compute.provider_config [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.055 2 WARNING nova.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.055 2 DEBUG oslo_concurrency.lockutils [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.056 2 DEBUG oslo_concurrency.lockutils [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.056 2 DEBUG oslo_concurrency.lockutils [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.056 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 
09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.057 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.058 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] console_host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cpu_allocation_ratio = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.059 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 
'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.060 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.061 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.062 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_usage_audit_period = month log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.063 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 
localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.064 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.065 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.066 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metadata_listen_port = 8775 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.067 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 
localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.068 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.069 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rate_limit_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.070 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service 
[None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.071 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rpc_response_timeout = 60 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.072 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] servicegroup_driver = db log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.073 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.074 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.075 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.076 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.077 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.078 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.079 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.080 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.081 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.082 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.083 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.084 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.085 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.086 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.087 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.088 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.089 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.090 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.091 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.092 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 5 05:33:22
localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.093 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.094 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] database.sqlite_synchronous = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.095 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.096 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.097 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 
DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.098 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - 
- - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.099 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.100 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG 
oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.101 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.timeout = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.102 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.103 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 
09:33:22.104 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.105 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.106 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.107 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.certfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.108 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.109 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.110 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] key_manager.fixed_key = **** 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.111 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.112 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.113 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.114 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.115 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 
2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.116 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.117 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.118 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG 
oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.119 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.120 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.disk_prefix = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.121 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.122 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.123 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 
09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.124 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.125 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.125 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.125 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 
09:33:22.125 2 WARNING oslo_config.cfg [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 5 05:33:22 localhost nova_compute[236931]: live_migration_uri is deprecated for removal in favor of two other options that Oct 5 05:33:22 localhost nova_compute[236931]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 5 05:33:22 localhost nova_compute[236931]: and ``live_migration_inbound_addr`` respectively. Oct 5 05:33:22 localhost nova_compute[236931]: ). Its value may be silently ignored in the future.#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.125 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.125 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.nfs_mount_options 
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.126 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.127 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rbd_secret_uuid = 659062ac-50b4-5607-b699-3105da7f55ee log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.128 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.129 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.130 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.131 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.132 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.132 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.132 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.132 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - 
-] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.132 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.132 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.133 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.134 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.135 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.status_code_retry_delay = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.136 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] notifications.notify_on_state_change = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.137 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.138 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.139 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.140 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.service_type = placement 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.141 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.142 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.143 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.143 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.143 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.143 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.143 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.143 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.ram = 51200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.144 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.145 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.145 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.145 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.145 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.145 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.145 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.146 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.query_placement_for_image_type_support = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.147 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.148 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.149 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.150 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.151 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.151 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.151 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.151 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.151 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.151 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.auth_type = password log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.152 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.153 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.153 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost 
nova_compute[236931]: 2025-10-05 09:33:22.153 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.153 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.153 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.153 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.154 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] upgrade_levels.cert = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.155 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.156 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.ca_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.157 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 
2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.158 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.159 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.vnc_port = 5900 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.160 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.server_listen = ::0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.161 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.162 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.163 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.164 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG 
oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.165 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.166 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.167 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.167 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.167 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.enforce_scope = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.167 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.167 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.167 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.168 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.169 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.170 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.171 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 
2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.172 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 
2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.173 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 
09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.174 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None 
req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.175 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.176 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 
localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.177 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service 
[None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.178 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_limit.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.179 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = 
oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.180 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.181 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] 
os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.182 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_ovs.ovsdb_interface = native 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.183 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.184 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.185 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.185 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.185 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.185 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.185 2 DEBUG oslo_service.service [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.186 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.198 2 INFO nova.virt.node [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Determined node identity 36221146-244b-49ab-8700-5471fa19d0c5 from /var/lib/nova/compute_id#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.198 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.199 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.199 2 DEBUG 
nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.199 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Oct 5 05:33:22 localhost systemd[1]: Starting libvirt QEMU daemon... Oct 5 05:33:22 localhost systemd[1]: Started libvirt QEMU daemon. Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.255 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.259 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.259 2 INFO nova.virt.libvirt.driver [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Connection event '1' reason 'None'#033[00m Oct 5 05:33:22 localhost nova_compute[236931]: 2025-10-05 09:33:22.289 2 DEBUG nova.virt.libvirt.volume.mount [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Oct 5 05:33:22 localhost python3.9[237343]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:33:23 localhost nova_compute[236931]: 
2025-10-05 09:33:23.129 2 INFO nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Libvirt host capabilities Oct 5 05:33:23 localhost nova_compute[236931]: [capabilities XML elided: element markup was stripped in extraction; recoverable values: host UUID 26eb4766-c662-4233-bdfd-7faae464b2de; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory 16116612 KiB total, 4029153 free; secmodels selinux (DOI 0, labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (DOI 0, +107:+107); guest support for hvm on i686 (wordsize 32) and x86_64 (wordsize 64) via emulator /usr/libexec/qemu-kvm, machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.6.0 (canonical q35, covering rhel8.0.0–rhel8.6.0, rhel9.0.0, rhel9.2.0, rhel9.4.0, rhel9.6.0)]#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.139 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.158 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Oct 5 05:33:23 localhost nova_compute[236931]: [domainCapabilities XML elided: element markup was stripped in extraction; recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.6.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with loader types rom and pflash, readonly yes/no, secure no; on/off feature toggles; host-model CPU EPYC-Rome, vendor AMD; supported custom CPU models include 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, … (listing truncated at end of chunk)]
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v5 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Conroe Oct 5 05:33:23 localhost nova_compute[236931]: Conroe-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake-v1 Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v1 Oct 
5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Genoa Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Genoa-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-IBPB Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v2 Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v4 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v1 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v2 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 
5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-noTSX-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: 
Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-noTSX Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
Oct 5 05:33:23 localhost nova_compute[236931]: [libvirt domain-capabilities XML flattened in log capture: tags stripped, one empty syslog line emitted per element. Recoverable values, grouped by the capability they appeared under:]
Oct 5 05:33:23 localhost nova_compute[236931]: CPU models: Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 5 05:33:23 localhost nova_compute[236931]: memory backing source types: file, anonymous, memfd
Oct 5 05:33:23 localhost nova_compute[236931]: disk devices: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:33:23 localhost nova_compute[236931]: graphics types: vnc, egl-headless, dbus
Oct 5 05:33:23 localhost nova_compute[236931]: hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Oct 5 05:33:23 localhost nova_compute[236931]: rng models: virtio, virtio-transitional, virtio-non-transitional; backend models: random, egd, builtin
Oct 5 05:33:23 localhost nova_compute[236931]: filesystem driver types: path, handle, virtiofs
Oct 5 05:33:23 localhost nova_compute[236931]: tpm models: tpm-tis, tpm-crb; backend models: emulator, external; backend version: 2.0
Oct 5 05:33:23 localhost nova_compute[236931]: redirdev bus: usb; channel types: pty, unix
Oct 5 05:33:23 localhost nova_compute[236931]: crypto backend models: qemu, builtin
Oct 5 05:33:23 localhost nova_compute[236931]: interface backends: default, passt
Oct 5 05:33:23 localhost nova_compute[236931]: panic models: isa, hyperv
Oct 5 05:33:23 localhost nova_compute[236931]: hyperv features: relaxed
nova_compute[236931]: vapic Oct 5 05:33:23 localhost nova_compute[236931]: spinlocks Oct 5 05:33:23 localhost nova_compute[236931]: vpindex Oct 5 05:33:23 localhost nova_compute[236931]: runtime Oct 5 05:33:23 localhost nova_compute[236931]: synic Oct 5 05:33:23 localhost nova_compute[236931]: stimer Oct 5 05:33:23 localhost nova_compute[236931]: reset Oct 5 05:33:23 localhost nova_compute[236931]: vendor_id Oct 5 05:33:23 localhost nova_compute[236931]: frequencies Oct 5 05:33:23 localhost nova_compute[236931]: reenlightenment Oct 5 05:33:23 localhost nova_compute[236931]: tlbflush Oct 5 05:33:23 localhost nova_compute[236931]: ipi Oct 5 05:33:23 localhost nova_compute[236931]: avic Oct 5 05:33:23 localhost nova_compute[236931]: emsr_bitmap Oct 5 05:33:23 localhost nova_compute[236931]: xmm_input Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.166 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: /usr/libexec/qemu-kvm Oct 5 05:33:23 localhost nova_compute[236931]: kvm Oct 5 05:33:23 localhost nova_compute[236931]: pc-i440fx-rhel7.6.0 Oct 5 05:33:23 localhost nova_compute[236931]: i686 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: 
/usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: rom Oct 5 05:33:23 localhost nova_compute[236931]: pflash Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: yes Oct 5 05:33:23 localhost nova_compute[236931]: no Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: no Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: on Oct 5 05:33:23 localhost nova_compute[236931]: off Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: on Oct 5 05:33:23 localhost nova_compute[236931]: off Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome Oct 5 05:33:23 localhost nova_compute[236931]: AMD Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: 486 Oct 5 05:33:23 localhost nova_compute[236931]: 486-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-noTSX-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Broadwell-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cascadelake-Server-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Cascadelake-Server-v5 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Conroe Oct 5 05:33:23 localhost nova_compute[236931]: Conroe-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v2 
Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Genoa Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Genoa-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-IBPB Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: 
Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v4 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v1 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v2 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS
nova_compute[236931]: Westmere-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Westmere-v2 Oct 5 05:33:23 localhost nova_compute[236931]: athlon Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: athlon-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: core2duo Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: core2duo-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: coreduo Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: coreduo-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: kvm32 Oct 5 05:33:23 localhost nova_compute[236931]: kvm32-v1 Oct 5 05:33:23 localhost nova_compute[236931]: kvm64 Oct 5 05:33:23 localhost nova_compute[236931]: kvm64-v1 Oct 5 05:33:23 localhost nova_compute[236931]: n270 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: n270-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: pentium Oct 5 05:33:23 localhost nova_compute[236931]: pentium-v1 Oct 5 05:33:23 localhost nova_compute[236931]: pentium2 Oct 5 05:33:23 localhost nova_compute[236931]: pentium2-v1 Oct 5 05:33:23 localhost nova_compute[236931]: pentium3 Oct 5 05:33:23 localhost nova_compute[236931]: pentium3-v1 Oct 5 05:33:23 localhost nova_compute[236931]: phenom Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: phenom-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: qemu32 Oct 5 05:33:23 localhost nova_compute[236931]: qemu32-v1 Oct 5 05:33:23 localhost nova_compute[236931]: qemu64 Oct 5 05:33:23 localhost nova_compute[236931]: qemu64-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: file Oct 5 05:33:23 localhost nova_compute[236931]: anonymous Oct 5 05:33:23 localhost nova_compute[236931]: memfd Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: disk Oct 5 05:33:23 localhost nova_compute[236931]: cdrom Oct 5 05:33:23 localhost nova_compute[236931]: floppy Oct 5 05:33:23 localhost nova_compute[236931]: lun Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: ide Oct 5 05:33:23 
localhost nova_compute[236931]: fdc Oct 5 05:33:23 localhost nova_compute[236931]: scsi Oct 5 05:33:23 localhost nova_compute[236931]: virtio Oct 5 05:33:23 localhost nova_compute[236931]: usb Oct 5 05:33:23 localhost nova_compute[236931]: sata Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: virtio Oct 5 05:33:23 localhost nova_compute[236931]: virtio-transitional Oct 5 05:33:23 localhost nova_compute[236931]: virtio-non-transitional Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: vnc Oct 5 05:33:23 localhost nova_compute[236931]: egl-headless Oct 5 05:33:23 localhost nova_compute[236931]: dbus Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: subsystem Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: default Oct 5 05:33:23 localhost nova_compute[236931]: mandatory Oct 5 05:33:23 localhost nova_compute[236931]: requisite Oct 5 05:33:23 localhost nova_compute[236931]: optional Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: usb Oct 5 05:33:23 localhost nova_compute[236931]: pci Oct 5 05:33:23 localhost nova_compute[236931]: scsi Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: virtio Oct 5 05:33:23 localhost nova_compute[236931]: virtio-transitional Oct 5 05:33:23 localhost nova_compute[236931]: virtio-non-transitional Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: random Oct 5 05:33:23 localhost nova_compute[236931]: egd Oct 5 05:33:23 localhost nova_compute[236931]: builtin Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: path Oct 5 05:33:23 localhost nova_compute[236931]: handle Oct 5 05:33:23 localhost nova_compute[236931]: virtiofs Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: tpm-tis Oct 5 05:33:23 localhost nova_compute[236931]: tpm-crb Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: emulator Oct 5 05:33:23 localhost nova_compute[236931]: external Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: 2.0 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: usb Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: pty Oct 5 05:33:23 localhost 
nova_compute[236931]: unix Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: qemu Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: builtin Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: default Oct 5 05:33:23 localhost nova_compute[236931]: passt Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: isa Oct 5 05:33:23 localhost nova_compute[236931]: hyperv Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: relaxed Oct 5 05:33:23 localhost nova_compute[236931]: vapic Oct 5 05:33:23 localhost nova_compute[236931]: spinlocks Oct 5 05:33:23 localhost nova_compute[236931]: vpindex Oct 5 05:33:23 
localhost nova_compute[236931]: runtime Oct 5 05:33:23 localhost nova_compute[236931]: synic Oct 5 05:33:23 localhost nova_compute[236931]: stimer Oct 5 05:33:23 localhost nova_compute[236931]: reset Oct 5 05:33:23 localhost nova_compute[236931]: vendor_id Oct 5 05:33:23 localhost nova_compute[236931]: frequencies Oct 5 05:33:23 localhost nova_compute[236931]: reenlightenment Oct 5 05:33:23 localhost nova_compute[236931]: tlbflush Oct 5 05:33:23 localhost nova_compute[236931]: ipi Oct 5 05:33:23 localhost nova_compute[236931]: avic Oct 5 05:33:23 localhost nova_compute[236931]: emsr_bitmap Oct 5 05:33:23 localhost nova_compute[236931]: xmm_input Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.212 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.217 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: /usr/libexec/qemu-kvm Oct 5 05:33:23 localhost nova_compute[236931]: kvm Oct 5 05:33:23 localhost nova_compute[236931]: pc-q35-rhel9.6.0 Oct 5 05:33:23 localhost nova_compute[236931]: x86_64 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
Oct 5 05:33:23 localhost nova_compute[236931]: [libvirt domainCapabilities XML for machine_type=q35; XML markup lost in log capture, recoverable element values follow]
Oct 5 05:33:23 localhost nova_compute[236931]: emulator path: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.6.0; arch: x86_64
Oct 5 05:33:23 localhost nova_compute[236931]: os firmware: efi; firmware loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom pflash; readonly: yes no; secure: yes no; further enum values: on off / on off
Oct 5 05:33:23 localhost nova_compute[236931]: host CPU model: EPYC-Rome; vendor: AMD
Oct 5 05:33:23 localhost nova_compute[236931]: CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids [list truncated, continues in next log segment]
Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-noTSX-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: 
Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v5 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 
5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v6 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v7 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: IvyBridge Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: IvyBridge-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: IvyBridge-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: IvyBridge-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: KnightsMill Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: KnightsMill-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Nehalem Oct 5 05:33:23 localhost nova_compute[236931]: Nehalem-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Nehalem-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Nehalem-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G1 Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G1-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G2 Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G2-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G3 Oct 5 05:33:23 localhost 
nova_compute[236931]: Opteron_G3-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G4-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G5 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Opteron_G5-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Penryn Oct 5 05:33:23 localhost nova_compute[236931]: Penryn-v1 Oct 5 05:33:23 localhost nova_compute[236931]: SandyBridge Oct 5 05:33:23 localhost nova_compute[236931]: SandyBridge-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: SandyBridge-v1 Oct 5 05:33:23 localhost nova_compute[236931]: SandyBridge-v2 Oct 5 05:33:23 localhost nova_compute[236931]: SapphireRapids Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: SapphireRapids-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
Oct 5 05:33:23 localhost nova_compute[236931]: [libvirt domainCapabilities XML dump; element markup lost in log capture, recoverable values follow]
Oct 5 05:33:23 localhost nova_compute[236931]: CPU models (continued): SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Oct 5 05:33:23 localhost nova_compute[236931]: memory backing source types: file anonymous memfd
Oct 5 05:33:23 localhost nova_compute[236931]: disk: devices disk cdrom floppy lun; buses fdc scsi virtio usb sata; models virtio virtio-transitional virtio-non-transitional
Oct 5 05:33:23 localhost nova_compute[236931]: graphics types: vnc egl-headless dbus
Oct 5 05:33:23 localhost nova_compute[236931]: hostdev: mode subsystem; startupPolicy default mandatory requisite optional; subsystem types usb pci scsi
Oct 5 05:33:23 localhost nova_compute[236931]: rng: models virtio virtio-transitional virtio-non-transitional; backend models random egd builtin
Oct 5 05:33:23 localhost nova_compute[236931]: filesystem driver types: path handle virtiofs
Oct 5 05:33:23 localhost nova_compute[236931]: tpm: models tpm-tis tpm-crb; backend models emulator external; backend version 2.0
Oct 5 05:33:23 localhost nova_compute[236931]: redirdev bus: usb; channel types: pty unix; further enum values: qemu builtin; interface backends: default passt; panic models: isa hyperv
Oct 5 05:33:23 localhost nova_compute[236931]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input
Oct 5 05:33:23 localhost nova_compute[236931]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.269 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 5 05:33:23 localhost nova_compute[236931]: [domainCapabilities XML dump; markup lost, recoverable values follow]
Oct 5 05:33:23 localhost nova_compute[236931]: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-i440fx-rhel7.6.0; arch x86_64
Oct 5 05:33:23 localhost nova_compute[236931]: firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; types rom pflash; readonly yes no; secure no; migratable enums on off, on off
Oct 5 05:33:23 localhost nova_compute[236931]: host CPU model: EPYC-Rome, vendor AMD
Oct 5 05:33:23 localhost nova_compute[236931]: CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake
nova_compute[236931]: Cooperlake-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Cooperlake-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Denverton-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Denverton-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Dhyana-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Genoa Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Genoa-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-IBPB Oct 5 05:33:23 localhost 
nova_compute[236931]: EPYC-Milan Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Milan-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: EPYC-Rome-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-Rome-v4 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v1 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v2 Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: EPYC-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 
05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: GraniteRapids-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-noTSX-IBRS Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v2 Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Haswell-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 
5 05:33:23 localhost nova_compute[236931]: Icelake-Server-noTSX Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v1 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost 
nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v2 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v3 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v4 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v5 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 
5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v6 Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Icelake-Server-v7 Oct 5 05:33:23 localhost 
Oct 5 05:33:23 localhost nova_compute[236931]: [libvirt domainCapabilities XML dump; element markup was lost in log transport, leaving only repeated syslog prefixes and bare enum values. Recoverable values follow, in original order; field labels are inferred from the domcapabilities schema and are not present in the captured text.]
Oct 5 05:33:23 localhost nova_compute[236931]: CPU models (list continues from earlier lines): IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 5 05:33:23 localhost nova_compute[236931]: memoryBacking sourceType: file, anonymous, memfd
Oct 5 05:33:23 localhost nova_compute[236931]: disk diskDevice: disk, cdrom, floppy, lun; bus: ide, fdc, scsi, virtio, usb, sata; model: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:33:23 localhost nova_compute[236931]: graphics type: vnc, egl-headless, dbus
Oct 5 05:33:23 localhost nova_compute[236931]: hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsysType: usb, pci, scsi
Oct 5 05:33:23 localhost nova_compute[236931]: rng model: virtio, virtio-transitional, virtio-non-transitional; backendModel: random, egd, builtin
Oct 5 05:33:23 localhost nova_compute[236931]: filesystem driverType: path, handle, virtiofs
Oct 5 05:33:23 localhost nova_compute[236931]: tpm model: tpm-tis, tpm-crb; backendModel: emulator, external; backendVersion: 2.0
Oct 5 05:33:23 localhost nova_compute[236931]: redirdev bus: usb
Oct 5 05:33:23 localhost nova_compute[236931]: channel type: pty, unix
Oct 5 05:33:23 localhost nova_compute[236931]: (unlabeled enum values): qemu; builtin
Oct 5 05:33:23 localhost nova_compute[236931]: interface backendType: default, passt
Oct 5 05:33:23 localhost nova_compute[236931]: panic model: isa, hyperv
localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: relaxed Oct 5 05:33:23 localhost nova_compute[236931]: vapic Oct 5 05:33:23 localhost nova_compute[236931]: spinlocks Oct 5 05:33:23 localhost nova_compute[236931]: vpindex Oct 5 05:33:23 localhost nova_compute[236931]: runtime Oct 5 05:33:23 localhost nova_compute[236931]: synic Oct 5 05:33:23 localhost nova_compute[236931]: stimer Oct 5 05:33:23 localhost nova_compute[236931]: reset Oct 5 05:33:23 localhost nova_compute[236931]: vendor_id Oct 5 05:33:23 localhost nova_compute[236931]: frequencies Oct 5 05:33:23 localhost nova_compute[236931]: reenlightenment Oct 5 05:33:23 localhost nova_compute[236931]: tlbflush Oct 5 05:33:23 localhost nova_compute[236931]: ipi Oct 5 05:33:23 localhost nova_compute[236931]: avic Oct 5 05:33:23 localhost nova_compute[236931]: emsr_bitmap Oct 5 05:33:23 localhost nova_compute[236931]: xmm_input Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: Oct 5 05:33:23 localhost nova_compute[236931]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.315 2 DEBUG nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.315 2 INFO nova.virt.libvirt.host [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Secure Boot support detected#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.317 2 INFO nova.virt.libvirt.driver [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] The 
live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.329 2 DEBUG nova.virt.libvirt.driver [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.345 2 INFO nova.virt.node [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Determined node identity 36221146-244b-49ab-8700-5471fa19d0c5 from /var/lib/nova/compute_id#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.361 2 DEBUG nova.compute.manager [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Verified node 36221146-244b-49ab-8700-5471fa19d0c5 matches my host np0005471152.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Oct 5 05:33:23 localhost nova_compute[236931]: 2025-10-05 09:33:23.383 2 INFO nova.compute.manager [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Oct 5 05:33:23 localhost python3.9[237465]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None
cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None 
sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 5 05:33:23 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 115.9 (386 of 333 items), suggesting rotation.
Oct 5 05:33:23 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 5 05:33:23 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 05:33:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38596 DF PROTO=TCP SPT=35986 DPT=9105 SEQ=2505543731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7647F360000000001030307)
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.000 2 INFO nova.service [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Updating service version for nova-compute on np0005471152.localdomain from 57 to 66#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.042 2 DEBUG oslo_concurrency.lockutils [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.042 2 DEBUG oslo_concurrency.lockutils [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.043 2 DEBUG oslo_concurrency.lockutils [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.043 2 DEBUG nova.compute.resource_tracker [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.044 2 DEBUG oslo_concurrency.processutils [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:33:24 localhost podman[237620]: 2025-10-05 09:33:24.481289921 +0000 UTC m=+0.092838618 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:33:24 localhost podman[237620]: 2025-10-05 09:33:24.489334381 +0000 UTC m=+0.100882988 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd) Oct 5 05:33:24 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.541 2 DEBUG oslo_concurrency.processutils [None req-580d6e50-93f9-4b42-84a8-5f8659df8d1a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:33:24 localhost systemd[1]: Starting libvirt nodedev daemon...
Oct 5 05:33:24 localhost systemd[1]: Started libvirt nodedev daemon.
Oct 5 05:33:24 localhost python3.9[237621]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:33:24 localhost systemd[1]: Stopping nova_compute container...
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.805 2 DEBUG oslo_concurrency.lockutils [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.806 2 DEBUG oslo_concurrency.lockutils [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 05:33:24 localhost nova_compute[236931]: 2025-10-05 09:33:24.806 2 DEBUG oslo_concurrency.lockutils [None req-4a9407bc-78a8-4acb-b4b1-8c8431db1fd9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 05:33:25 localhost journal[237275]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 5 05:33:25 localhost journal[237275]: hostname: np0005471152.localdomain
Oct 5 05:33:25 localhost journal[237275]: End of file while reading data: Input/output error
Oct 5 05:33:25 localhost systemd[1]:
libpod-c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5.scope: Deactivated successfully. Oct 5 05:33:25 localhost systemd[1]: libpod-c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5.scope: Consumed 3.116s CPU time. Oct 5 05:33:25 localhost podman[237664]: 2025-10-05 09:33:25.132220593 +0000 UTC m=+0.380767393 container died c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute) Oct 5 05:33:25 localhost systemd[1]: 
var-lib-containers-storage-overlay-625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a-merged.mount: Deactivated successfully. Oct 5 05:33:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5-userdata-shm.mount: Deactivated successfully. Oct 5 05:33:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32133 DF PROTO=TCP SPT=59432 DPT=9882 SEQ=2796591488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764879C0000000001030307) Oct 5 05:33:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:33:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46532 DF PROTO=TCP SPT=35892 DPT=9100 SEQ=4031743700 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7648FB60000000001030307) Oct 5 05:33:28 localhost podman[237943]: 2025-10-05 09:33:28.462278385 +0000 UTC m=+0.630799897 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:33:29 localhost podman[237943]: 2025-10-05 09:33:29.849226833 +0000 UTC m=+2.017748295 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:33:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:33:30 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:33:30 localhost podman[237968]: 2025-10-05 09:33:30.166475274 +0000 UTC m=+0.085175546 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:33:30 localhost podman[237968]: 2025-10-05 09:33:30.220710232 +0000 UTC m=+0.139410524 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:33:30 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:33:30 localhost podman[237664]: 2025-10-05 09:33:30.241891566 +0000 UTC m=+5.490438336 container cleanup c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:33:30 localhost podman[237664]: nova_compute 
Oct 5 05:33:30 localhost podman[237676]: 2025-10-05 09:33:30.244867623 +0000 UTC m=+5.107830015 container cleanup c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:33:30 localhost podman[237987]: 2025-10-05 09:33:30.325548622 +0000 UTC m=+0.053186631 container cleanup c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, 
managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_managed=true) Oct 5 05:33:30 localhost podman[237987]: nova_compute Oct 5 05:33:30 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Oct 5 05:33:30 localhost systemd[1]: Stopped nova_compute container. Oct 5 05:33:30 localhost systemd[1]: Starting nova_compute container... Oct 5 05:33:30 localhost systemd[1]: tmp-crun.7g91jF.mount: Deactivated successfully. Oct 5 05:33:30 localhost systemd[1]: Started libcrun container. 
Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:30 localhost podman[237998]: 2025-10-05 09:33:30.470625994 +0000 UTC m=+0.117018540 container init c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:33:30 localhost podman[237998]: 2025-10-05 09:33:30.479098595 +0000 UTC m=+0.125491141 container start c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:33:30 localhost podman[237998]: nova_compute
Oct 5 05:33:30 localhost nova_compute[238014]: + sudo -E kolla_set_configs
Oct 5 05:33:30 localhost systemd[1]: Started nova_compute container.
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Validating config file
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying service configuration files
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /etc/ceph
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Creating directory /etc/ceph
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/ceph
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Writing out command to execute
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:33:30 localhost nova_compute[238014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 5 05:33:30 localhost nova_compute[238014]: ++ cat /run_command
Oct 5 05:33:30 localhost nova_compute[238014]: + CMD=nova-compute
Oct 5 05:33:30 localhost nova_compute[238014]: + ARGS=
Oct 5 05:33:30 localhost nova_compute[238014]: + sudo kolla_copy_cacerts
Oct 5 05:33:30 localhost nova_compute[238014]: + [[ ! -n '' ]]
Oct 5 05:33:30 localhost nova_compute[238014]: + . kolla_extend_start
Oct 5 05:33:30 localhost nova_compute[238014]: Running command: 'nova-compute'
Oct 5 05:33:30 localhost nova_compute[238014]: + echo 'Running command: '\''nova-compute'\'''
Oct 5 05:33:30 localhost nova_compute[238014]: + umask 0022
Oct 5 05:33:30 localhost nova_compute[238014]: + exec nova-compute
Oct 5 05:33:31 localhost python3.9[238135]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 5 05:33:31 localhost systemd[1]: Started libpod-conmon-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7.scope.
Oct 5 05:33:31 localhost systemd[1]: Started libcrun container.
Oct 5 05:33:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 05:33:31 localhost podman[238161]: 2025-10-05 09:33:31.688151334 +0000 UTC m=+0.125820590 container init 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 5 05:33:31 localhost podman[238161]: 2025-10-05 09:33:31.698062523 +0000 UTC m=+0.135731779 container start 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Oct 5 05:33:31 localhost python3.9[238135]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Applying nova statedir ownership
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7bff446e28da7b7609613334d4f266c2377bdec4e8e9a595eeb621178e5df9fb
Oct 5 05:33:31 localhost nova_compute_init[238181]: INFO:nova_statedir:Nova statedir ownership complete
Oct 5 05:33:31 localhost systemd[1]: libpod-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7.scope: Deactivated successfully.
Oct 5 05:33:31 localhost podman[238182]: 2025-10-05 09:33:31.76952175 +0000 UTC m=+0.054054923 container died 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true)
Oct 5 05:33:31 localhost podman[238193]: 2025-10-05 09:33:31.847894369 +0000 UTC m=+0.076935002 container cleanup 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 5 05:33:31 localhost systemd[1]: libpod-conmon-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7.scope: Deactivated successfully.
Oct 5 05:33:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52423 DF PROTO=TCP SPT=47482 DPT=9102 SEQ=1875859538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7649EF60000000001030307)
Oct 5 05:33:32 localhost systemd[1]: var-lib-containers-storage-overlay-dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3-merged.mount: Deactivated successfully.
Oct 5 05:33:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7-userdata-shm.mount: Deactivated successfully.
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.233 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.233 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.233 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.233 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.345 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.368 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:33:32 localhost systemd[1]: session-55.scope: Deactivated successfully.
Oct 5 05:33:32 localhost systemd[1]: session-55.scope: Consumed 2min 33.255s CPU time.
Oct 5 05:33:32 localhost systemd-logind[760]: Session 55 logged out. Waiting for processes to exit.
Oct 5 05:33:32 localhost systemd-logind[760]: Removed session 55.
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.849 2 INFO nova.virt.driver [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.957 2 INFO nova.compute.provider_config [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.967 2 WARNING nova.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.968 2 DEBUG oslo_concurrency.lockutils [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.968 2 DEBUG oslo_concurrency.lockutils [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.968 2 DEBUG oslo_concurrency.lockutils [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.969 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.969 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.969 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.969 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.970 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.970 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.970 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.970 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.970 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.971 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.971 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.971 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.971 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.971 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.972 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.972 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.972 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.972 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.973 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.973 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] console_host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.973 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.973 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.974 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.974 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.974 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.974 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.974 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.975 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.975 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.975 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.975 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.976 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.976 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.976 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.976 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.977 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.977 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.977 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.977 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.977 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.978 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.978 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.978 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.978 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.978 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.979 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.979 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.979 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.979 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_usage_audit_period = month log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.980 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.980 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.980 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.980 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.981 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.981 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.981 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 
localhost nova_compute[238014]: 2025-10-05 09:33:32.981 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.982 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.982 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.982 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.982 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.982 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.983 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost 
nova_compute[238014]: 2025-10-05 09:33:32.983 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.983 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.983 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.983 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.984 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.984 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.984 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.984 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.984 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.985 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metadata_listen_port = 8775 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.986 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 
localhost nova_compute[238014]: 2025-10-05 09:33:32.987 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.988 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.989 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.990 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rate_limit_interval = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.991 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service 
[None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.992 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.993 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rpc_response_timeout = 60 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.994 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] servicegroup_driver = db log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.995 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.996 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.997 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.997 2 DEBUG 
oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.997 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.997 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.998 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_eventlog = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:32 localhost nova_compute[238014]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:32.999 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service 
[None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.000 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.001 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.002 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.003 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.003 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] 
oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.003 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.004 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.005 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.006 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.006 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.006 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.007 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.007 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.007 2 DEBUG 
oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.008 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.008 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.008 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.009 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.009 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.009 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - 
- - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.010 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.011 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.011 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.011 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.backend = oslo_cache.dict 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.012 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.013 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238014]: 2025-10-05 09:33:33.014 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.015 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.016 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.016 2 DEBUG 
oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.017 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.018 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] 
cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.019 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.020 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.021 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.022 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.023 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.024 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.os_region_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.025 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.026 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.027 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] compute.vmdk_allowed_types = 
['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.028 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.029 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.030 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.031 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.032 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238014]: 2025-10-05 09:33:33.033 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.034 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.035 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] database.sqlite_synchronous = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.036 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.037 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.038 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238014]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.039 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.040 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.040 2 
DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.041 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - 
- - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.042 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.043 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.044 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.045 2 DEBUG 
oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.045 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.timeout = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.046 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 
5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.047 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.048 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 
09:33:33.049 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.049 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.050 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.051 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.052 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.certfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.053 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.054 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238014]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.055 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.056 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.057 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] key_manager.fixed_key = **** 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.058 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.059 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238014]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.060 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.061 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] 
vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.062 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 
2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.063 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] 
keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.064 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.065 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG 
oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.066 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] 
libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.067 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.disk_prefix = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.068 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.069 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.070 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 
09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.071 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 
09:33:33.072 2 WARNING oslo_config.cfg [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 5 05:33:33 localhost nova_compute[238014]: live_migration_uri is deprecated for removal in favor of two other options that Oct 5 05:33:33 localhost nova_compute[238014]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 5 05:33:33 localhost nova_compute[238014]: and ``live_migration_inbound_addr`` respectively. Oct 5 05:33:33 localhost nova_compute[238014]: ). Its value may be silently ignored in the future.#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.072 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.nfs_mount_options 
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.073 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.074 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rbd_secret_uuid = 659062ac-50b4-5607-b699-3105da7f55ee log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.075 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.076 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.077 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238014]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.078 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.079 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - 
-] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.080 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.081 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.082 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.083 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.status_code_retry_delay = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.084 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] notifications.notify_on_state_change = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.085 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.086 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost 
nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.087 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.088 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.service_type = placement 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.089 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.090 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.091 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.ram = 51200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.092 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.093 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.094 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.095 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.096 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.097 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.098 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.099 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.100 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.101 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.102 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.103 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.104 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.105 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.106 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.107 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.108 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.109 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.110 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.111 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG
oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.112 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.113 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.enforce_scope = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.114 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.115 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.116 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] 
oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.117 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.118 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 
2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.119 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 
2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.120 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 
09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.121 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None 
req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.122 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.keyfile = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.123 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 
localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.124 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service 
[None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.125 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.126 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.127 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.128 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.129 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.130 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.131 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.132 2 DEBUG oslo_service.service [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.133 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.144 2 INFO nova.virt.node [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Determined node identity 36221146-244b-49ab-8700-5471fa19d0c5 from /var/lib/nova/compute_id
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.144 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.145 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.145 2 DEBUG
nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.145 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.154 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.156 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.157 2 INFO nova.virt.libvirt.driver [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Connection event '1' reason 'None'
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.163 2 INFO nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Libvirt host capabilities
[libvirt host capabilities XML elided: the element markup was stripped in capture, leaving only text values. Recoverable values: host UUID 26eb4766-c662-4233-bdfd-7faae464b2de; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 / pages 4029153, 0, 0; secmodels selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guests at 32-bit and 64-bit word size via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.6.0 (canonical q35).]
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.168 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.168 2 DEBUG nova.virt.libvirt.volume.mount [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.173 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[libvirt domain capabilities XML elided: the element markup was stripped in capture, leaving only text values. Recoverable values: emulator /usr/libexec/qemu-kvm, virt type kvm, machine pc-q35-rhel9.6.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd, loader types rom and pflash; host CPU model EPYC-Rome, vendor AMD; listed CPU models include 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2; the dump is truncated here mid-list.]
nova_compute[238014]: Denverton-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-IBPB Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v1 
Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v4 Oct 5 05:33:33 localhost 
nova_compute[238014]: EPYC-v1 Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v2 Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 
5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 
5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v6 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v7 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
Oct 5 05:33:33 localhost nova_compute[238014]: [libvirt <domainCapabilities> reply; XML markup lost in log capture — recoverable values follow]
Oct 5 05:33:33 localhost nova_compute[238014]: usable CPU models: IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 5 05:33:33 localhost nova_compute[238014]: memory backing sourceType: file, anonymous, memfd
Oct 5 05:33:33 localhost nova_compute[238014]: disk — diskDevice: disk, cdrom, floppy, lun; bus: fdc, scsi, virtio, usb, sata; model: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:33:33 localhost nova_compute[238014]: graphics type: vnc, egl-headless, dbus
Oct 5 05:33:33 localhost nova_compute[238014]: hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsysType: usb, pci, scsi
Oct 5 05:33:33 localhost nova_compute[238014]: rng model: virtio, virtio-transitional, virtio-non-transitional; backendModel: random, egd, builtin
Oct 5 05:33:33 localhost nova_compute[238014]: filesystem driverType: path, handle, virtiofs
Oct 5 05:33:33 localhost nova_compute[238014]: tpm model: tpm-tis, tpm-crb; backendModel: emulator, external; backendVersion: 2.0
Oct 5 05:33:33 localhost nova_compute[238014]: redirdev bus: usb
Oct 5 05:33:33 localhost nova_compute[238014]: channel type: pty, unix
Oct 5 05:33:33 localhost nova_compute[238014]: crypto: qemu, builtin
Oct 5 05:33:33 localhost nova_compute[238014]: interface backendType: default, passt
Oct 5 05:33:33 localhost nova_compute[238014]: panic model: isa, hyperv
Oct 5 05:33:33 localhost nova_compute[238014]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Oct 5 05:33:33 localhost nova_compute[238014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.178 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 5 05:33:33 localhost nova_compute[238014]: [second <domainCapabilities> reply; XML markup lost in log capture — recoverable values follow]
Oct 5 05:33:33 localhost nova_compute[238014]: path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Oct 5 05:33:33 localhost nova_compute[238014]: os loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; type: rom, pflash; readonly: yes, no; secure: no
Oct 5 05:33:33 localhost nova_compute[238014]: cpu host-model: EPYC-Rome, vendor AMD; remaining on/off enum values and empty elements in this reply are not recoverable
Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 486 Oct 5 05:33:33 localhost nova_compute[238014]: 486-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v1 Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Conroe Oct 5 05:33:33 localhost nova_compute[238014]: Conroe-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Dhyana Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-IBPB Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v4 Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v1 Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v2 Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
Oct 5 05:33:33 localhost nova_compute[238014]: [multi-line libvirt domain-capabilities output; repeated empty syslog prefixes collapsed, recoverable values kept in original order]
Oct 5 05:33:33 localhost nova_compute[238014]: CPU models: Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 5 05:33:33 localhost nova_compute[238014]: memory backing source types: file, anonymous, memfd
Oct 5 05:33:33 localhost nova_compute[238014]: disk device types: disk, cdrom, floppy, lun; buses: ide, fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:33:33 localhost nova_compute[238014]: graphics types: vnc, egl-headless, dbus
Oct 5 05:33:33 localhost nova_compute[238014]: hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:33:33 localhost nova_compute[238014]: rng model: random
Oct 5 05:33:33 localhost
nova_compute[238014]: egd Oct 5 05:33:33 localhost nova_compute[238014]: builtin Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: path Oct 5 05:33:33 localhost nova_compute[238014]: handle Oct 5 05:33:33 localhost nova_compute[238014]: virtiofs Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: tpm-tis Oct 5 05:33:33 localhost nova_compute[238014]: tpm-crb Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: emulator Oct 5 05:33:33 localhost nova_compute[238014]: external Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 2.0 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: pty Oct 5 05:33:33 localhost nova_compute[238014]: unix Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: qemu Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: builtin Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: default Oct 5 05:33:33 localhost nova_compute[238014]: passt Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: isa Oct 5 05:33:33 localhost nova_compute[238014]: hyperv Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: relaxed Oct 5 05:33:33 localhost nova_compute[238014]: vapic Oct 5 05:33:33 localhost nova_compute[238014]: spinlocks Oct 5 05:33:33 localhost nova_compute[238014]: vpindex Oct 5 05:33:33 localhost nova_compute[238014]: runtime Oct 5 05:33:33 localhost nova_compute[238014]: synic Oct 5 05:33:33 localhost nova_compute[238014]: stimer Oct 5 05:33:33 localhost nova_compute[238014]: reset Oct 5 05:33:33 localhost nova_compute[238014]: vendor_id Oct 5 05:33:33 localhost nova_compute[238014]: frequencies Oct 5 05:33:33 localhost nova_compute[238014]: reenlightenment Oct 5 05:33:33 localhost 
nova_compute[238014]: tlbflush Oct 5 05:33:33 localhost nova_compute[238014]: ipi Oct 5 05:33:33 localhost nova_compute[238014]: avic Oct 5 05:33:33 localhost nova_compute[238014]: emsr_bitmap Oct 5 05:33:33 localhost nova_compute[238014]: xmm_input Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.231 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.234 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: /usr/libexec/qemu-kvm Oct 5 05:33:33 localhost nova_compute[238014]: kvm Oct 5 05:33:33 localhost nova_compute[238014]: pc-q35-rhel9.6.0 Oct 5 05:33:33 localhost nova_compute[238014]: x86_64 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: efi Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Oct 5 05:33:33 localhost nova_compute[238014]: /usr/share/edk2/ovmf/OVMF_CODE.fd Oct 5 05:33:33 
localhost nova_compute[238014]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Oct 5 05:33:33 localhost nova_compute[238014]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: rom Oct 5 05:33:33 localhost nova_compute[238014]: pflash Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: yes Oct 5 05:33:33 localhost nova_compute[238014]: no Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: yes Oct 5 05:33:33 localhost nova_compute[238014]: no Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: on Oct 5 05:33:33 localhost nova_compute[238014]: off Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: on Oct 5 05:33:33 localhost nova_compute[238014]: off Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238014]: AMD Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 
Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 486 Oct 5 05:33:33 localhost nova_compute[238014]: 486-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 
5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Conroe Oct 5 05:33:33 localhost nova_compute[238014]: Conroe-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 
5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-IBPB Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v3
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v4
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v1
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v2
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v3
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v4
Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids
Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v1
Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v3
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v4
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-noTSX
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v3
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v4
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v5
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v6
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v7
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238014]: KnightsMill
Oct 5 05:33:33 localhost nova_compute[238014]: KnightsMill-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G1-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G2
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G2-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G3
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G3-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G4
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G4-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G5
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G5-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Penryn
Oct 5 05:33:33 localhost nova_compute[238014]: Penryn-v1
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids-v1
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids-v2
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids-v3
Oct 5 05:33:33 localhost nova_compute[238014]: SierraForest
Oct 5 05:33:33 localhost nova_compute[238014]: SierraForest-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v3
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v4
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v3
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 
5 05:33:33 localhost nova_compute[238014]: Snowridge-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Westmere Oct 5 05:33:33 localhost nova_compute[238014]: Westmere-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Westmere-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Westmere-v2 Oct 5 05:33:33 localhost nova_compute[238014]: athlon Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: athlon-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: core2duo Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: core2duo-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: coreduo Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: coreduo-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: kvm32 Oct 5 05:33:33 localhost nova_compute[238014]: kvm32-v1 Oct 5 05:33:33 localhost nova_compute[238014]: kvm64 Oct 5 05:33:33 localhost nova_compute[238014]: kvm64-v1 Oct 5 05:33:33 localhost nova_compute[238014]: n270 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: n270-v1 Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: pentium Oct 5 05:33:33 localhost nova_compute[238014]: pentium-v1 Oct 5 05:33:33 localhost nova_compute[238014]: pentium2 Oct 5 05:33:33 localhost nova_compute[238014]: pentium2-v1 Oct 5 05:33:33 localhost nova_compute[238014]: pentium3 Oct 5 05:33:33 localhost nova_compute[238014]: pentium3-v1 Oct 5 05:33:33 localhost nova_compute[238014]: phenom Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: phenom-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: qemu32 Oct 5 05:33:33 localhost nova_compute[238014]: qemu32-v1 Oct 5 05:33:33 localhost nova_compute[238014]: qemu64 Oct 5 05:33:33 localhost nova_compute[238014]: qemu64-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: file Oct 5 05:33:33 localhost nova_compute[238014]: anonymous Oct 5 05:33:33 localhost nova_compute[238014]: memfd Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: disk Oct 5 05:33:33 localhost nova_compute[238014]: cdrom Oct 5 05:33:33 localhost nova_compute[238014]: floppy Oct 5 05:33:33 localhost nova_compute[238014]: lun Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: fdc Oct 5 05:33:33 localhost nova_compute[238014]: scsi Oct 5 05:33:33 localhost nova_compute[238014]: virtio Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: sata Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: virtio Oct 5 05:33:33 localhost nova_compute[238014]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238014]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: vnc Oct 5 05:33:33 localhost nova_compute[238014]: egl-headless Oct 5 05:33:33 localhost nova_compute[238014]: dbus Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: subsystem Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: default Oct 5 05:33:33 localhost nova_compute[238014]: mandatory Oct 5 05:33:33 localhost nova_compute[238014]: requisite Oct 5 05:33:33 localhost nova_compute[238014]: optional Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: pci Oct 5 05:33:33 localhost nova_compute[238014]: scsi Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: virtio Oct 5 05:33:33 localhost nova_compute[238014]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238014]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: random Oct 5 05:33:33 localhost nova_compute[238014]: egd Oct 5 05:33:33 localhost nova_compute[238014]: builtin Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: path Oct 5 05:33:33 localhost nova_compute[238014]: handle Oct 5 05:33:33 localhost nova_compute[238014]: virtiofs Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: tpm-tis Oct 5 05:33:33 localhost nova_compute[238014]: tpm-crb Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: emulator Oct 5 05:33:33 localhost nova_compute[238014]: external Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 2.0 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: pty Oct 5 05:33:33 localhost nova_compute[238014]: unix Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: qemu Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: builtin Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: default Oct 5 05:33:33 localhost nova_compute[238014]: passt Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: isa Oct 5 05:33:33 localhost nova_compute[238014]: hyperv Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: relaxed Oct 5 05:33:33 localhost nova_compute[238014]: vapic Oct 5 05:33:33 localhost nova_compute[238014]: spinlocks Oct 5 
05:33:33 localhost nova_compute[238014]: vpindex Oct 5 05:33:33 localhost nova_compute[238014]: runtime Oct 5 05:33:33 localhost nova_compute[238014]: synic Oct 5 05:33:33 localhost nova_compute[238014]: stimer Oct 5 05:33:33 localhost nova_compute[238014]: reset Oct 5 05:33:33 localhost nova_compute[238014]: vendor_id Oct 5 05:33:33 localhost nova_compute[238014]: frequencies Oct 5 05:33:33 localhost nova_compute[238014]: reenlightenment Oct 5 05:33:33 localhost nova_compute[238014]: tlbflush Oct 5 05:33:33 localhost nova_compute[238014]: ipi Oct 5 05:33:33 localhost nova_compute[238014]: avic Oct 5 05:33:33 localhost nova_compute[238014]: emsr_bitmap Oct 5 05:33:33 localhost nova_compute[238014]: xmm_input Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.293 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: /usr/libexec/qemu-kvm Oct 5 05:33:33 localhost nova_compute[238014]: kvm Oct 5 05:33:33 localhost nova_compute[238014]: pc-i440fx-rhel7.6.0 Oct 5 05:33:33 localhost nova_compute[238014]: x86_64 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: /usr/share/OVMF/OVMF_CODE.secboot.fd Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: rom Oct 5 05:33:33 localhost nova_compute[238014]: pflash Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: yes Oct 5 05:33:33 localhost nova_compute[238014]: no Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: no Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: on Oct 5 05:33:33 localhost nova_compute[238014]: off Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: on Oct 5 05:33:33 localhost nova_compute[238014]: off Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome Oct 5 05:33:33 localhost nova_compute[238014]: AMD Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 486 Oct 5 05:33:33 localhost nova_compute[238014]: 486-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-noTSX-IBRS Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Broadwell-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-noTSX Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Cascadelake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
Oct 5 05:33:33 localhost nova_compute[238014]: Conroe
Oct 5 05:33:33 localhost nova_compute[238014]: Conroe-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake
Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Cooperlake-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Denverton
Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Denverton-v3
Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana
Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Dhyana-v2
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Genoa-v1
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-IBPB
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v1
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Milan-v2
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v1
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v2
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v3
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-Rome-v4
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v1
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v2
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v3
Oct 5 05:33:33 localhost nova_compute[238014]: EPYC-v4
Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids
Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v1
Oct 5 05:33:33 localhost nova_compute[238014]: GraniteRapids-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-noTSX-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v3
Oct 5 05:33:33 localhost nova_compute[238014]: Haswell-v4
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-noTSX
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v3
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v4
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v5
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v6
Oct 5 05:33:33 localhost nova_compute[238014]: Icelake-Server-v7
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238014]: IvyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238014]: KnightsMill
Oct 5 05:33:33 localhost nova_compute[238014]: KnightsMill-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Nehalem-v2
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G1-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G2
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G2-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G3
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G3-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G4
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G4-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G5
Oct 5 05:33:33 localhost nova_compute[238014]: Opteron_G5-v1
Oct 5 05:33:33 localhost nova_compute[238014]: Penryn
Oct 5 05:33:33 localhost nova_compute[238014]: Penryn-v1
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge-IBRS
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge-v1
Oct 5 05:33:33 localhost nova_compute[238014]: SandyBridge-v2
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids-v1
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids-v2
Oct 5 05:33:33 localhost nova_compute[238014]: SapphireRapids-v3
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: SierraForest Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: SierraForest-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Skylake-Client-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Client-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 
localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-noTSX-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Skylake-Server-v5 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost 
nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v2 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v3 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Snowridge-v4 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Westmere Oct 5 05:33:33 localhost nova_compute[238014]: Westmere-IBRS Oct 5 05:33:33 localhost nova_compute[238014]: Westmere-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Westmere-v2 Oct 5 
05:33:33 localhost nova_compute[238014]: athlon Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: athlon-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: core2duo Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: core2duo-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: coreduo Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: coreduo-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: kvm32 Oct 5 05:33:33 localhost nova_compute[238014]: kvm32-v1 Oct 5 05:33:33 localhost nova_compute[238014]: kvm64 Oct 5 05:33:33 localhost nova_compute[238014]: kvm64-v1 Oct 5 05:33:33 localhost nova_compute[238014]: n270 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: n270-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: pentium Oct 5 05:33:33 localhost nova_compute[238014]: pentium-v1 Oct 5 
05:33:33 localhost nova_compute[238014]: pentium2 Oct 5 05:33:33 localhost nova_compute[238014]: pentium2-v1 Oct 5 05:33:33 localhost nova_compute[238014]: pentium3 Oct 5 05:33:33 localhost nova_compute[238014]: pentium3-v1 Oct 5 05:33:33 localhost nova_compute[238014]: phenom Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: phenom-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: qemu32 Oct 5 05:33:33 localhost nova_compute[238014]: qemu32-v1 Oct 5 05:33:33 localhost nova_compute[238014]: qemu64 Oct 5 05:33:33 localhost nova_compute[238014]: qemu64-v1 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: file Oct 5 05:33:33 localhost nova_compute[238014]: anonymous Oct 5 05:33:33 localhost nova_compute[238014]: memfd Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: disk Oct 5 05:33:33 localhost nova_compute[238014]: cdrom Oct 5 05:33:33 localhost nova_compute[238014]: floppy Oct 5 05:33:33 localhost nova_compute[238014]: lun Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: ide Oct 5 05:33:33 localhost nova_compute[238014]: fdc Oct 5 05:33:33 localhost nova_compute[238014]: scsi Oct 5 05:33:33 
localhost nova_compute[238014]: virtio Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: sata Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: virtio Oct 5 05:33:33 localhost nova_compute[238014]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238014]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: vnc Oct 5 05:33:33 localhost nova_compute[238014]: egl-headless Oct 5 05:33:33 localhost nova_compute[238014]: dbus Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: subsystem Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: default Oct 5 05:33:33 localhost nova_compute[238014]: mandatory Oct 5 05:33:33 localhost nova_compute[238014]: requisite Oct 5 05:33:33 localhost nova_compute[238014]: optional Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: pci Oct 5 05:33:33 localhost nova_compute[238014]: scsi Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: virtio Oct 5 05:33:33 localhost 
nova_compute[238014]: virtio-transitional Oct 5 05:33:33 localhost nova_compute[238014]: virtio-non-transitional Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: random Oct 5 05:33:33 localhost nova_compute[238014]: egd Oct 5 05:33:33 localhost nova_compute[238014]: builtin Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: path Oct 5 05:33:33 localhost nova_compute[238014]: handle Oct 5 05:33:33 localhost nova_compute[238014]: virtiofs Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: tpm-tis Oct 5 05:33:33 localhost nova_compute[238014]: tpm-crb Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: emulator Oct 5 05:33:33 localhost nova_compute[238014]: external Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: 2.0 Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: usb Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: pty Oct 5 05:33:33 localhost nova_compute[238014]: unix Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 
05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: qemu Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: builtin Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: default Oct 5 05:33:33 localhost nova_compute[238014]: passt Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: isa Oct 5 05:33:33 localhost nova_compute[238014]: hyperv Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: relaxed Oct 5 05:33:33 localhost nova_compute[238014]: vapic Oct 5 05:33:33 localhost nova_compute[238014]: spinlocks Oct 5 05:33:33 localhost nova_compute[238014]: vpindex Oct 5 05:33:33 localhost nova_compute[238014]: runtime Oct 5 05:33:33 localhost nova_compute[238014]: synic Oct 5 05:33:33 localhost 
nova_compute[238014]: stimer Oct 5 05:33:33 localhost nova_compute[238014]: reset Oct 5 05:33:33 localhost nova_compute[238014]: vendor_id Oct 5 05:33:33 localhost nova_compute[238014]: frequencies Oct 5 05:33:33 localhost nova_compute[238014]: reenlightenment Oct 5 05:33:33 localhost nova_compute[238014]: tlbflush Oct 5 05:33:33 localhost nova_compute[238014]: ipi Oct 5 05:33:33 localhost nova_compute[238014]: avic Oct 5 05:33:33 localhost nova_compute[238014]: emsr_bitmap Oct 5 05:33:33 localhost nova_compute[238014]: xmm_input Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: Oct 5 05:33:33 localhost nova_compute[238014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.342 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.342 2 INFO nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Secure Boot support detected#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.344 2 INFO nova.virt.libvirt.driver [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.344 2 INFO nova.virt.libvirt.driver [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will 
not be in use.#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.353 2 DEBUG nova.virt.libvirt.driver [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.379 2 INFO nova.virt.node [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Determined node identity 36221146-244b-49ab-8700-5471fa19d0c5 from /var/lib/nova/compute_id#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.411 2 DEBUG nova.compute.manager [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Verified node 36221146-244b-49ab-8700-5471fa19d0c5 matches my host np0005471152.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.497 2 INFO nova.compute.manager [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.788 2 DEBUG oslo_concurrency.lockutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.789 2 DEBUG oslo_concurrency.lockutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.789 2 DEBUG oslo_concurrency.lockutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.790 2 DEBUG nova.compute.resource_tracker [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:33:33 localhost nova_compute[238014]: 2025-10-05 09:33:33.790 2 DEBUG oslo_concurrency.processutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47406 DF PROTO=TCP SPT=53532 DPT=9101 SEQ=873210496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764A6E30000000001030307)
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.239 2 DEBUG oslo_concurrency.processutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.450 2 WARNING nova.virt.libvirt.driver [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.452 2 DEBUG nova.compute.resource_tracker [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13580MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.453 2 DEBUG oslo_concurrency.lockutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.453 2 DEBUG oslo_concurrency.lockutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.814 2 DEBUG nova.compute.resource_tracker [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.814 2 DEBUG nova.compute.resource_tracker [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.837 2 DEBUG nova.scheduler.client.report [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.855 2 DEBUG nova.scheduler.client.report [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.856 2 DEBUG nova.compute.provider_tree [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.883 2 DEBUG nova.scheduler.client.report [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.906 2 DEBUG nova.scheduler.client.report [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 5 05:33:34 localhost nova_compute[238014]: 2025-10-05 09:33:34.935 2 DEBUG oslo_concurrency.processutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.383 2 DEBUG oslo_concurrency.processutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.389 2 DEBUG nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 5 05:33:35 localhost nova_compute[238014]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.389 2 INFO nova.virt.libvirt.host [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.390 2 DEBUG nova.compute.provider_tree [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.391 2 DEBUG nova.virt.libvirt.driver [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.419 2 DEBUG nova.scheduler.client.report [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.497 2 DEBUG nova.compute.provider_tree [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Updating resource provider 36221146-244b-49ab-8700-5471fa19d0c5 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.534 2 DEBUG nova.compute.resource_tracker [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.534 2 DEBUG oslo_concurrency.lockutils [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.535 2 DEBUG nova.service [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.629 2 DEBUG nova.service [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.629 2 DEBUG nova.servicegroup.drivers.db [None req-fb103ddd-870c-4d2d-baa6-2daaf121aa7e - - - - - -] DB_Driver: join new ServiceGroup member np0005471152.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.630 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:33:35 localhost nova_compute[238014]: 2025-10-05 09:33:35.650 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:33:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47408 DF PROTO=TCP SPT=53532 DPT=9101 SEQ=873210496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764B2F60000000001030307)
Oct 5 05:33:38 localhost sshd[238300]: main: sshd: ssh-rsa algorithm is disabled
Oct 5 05:33:38 localhost systemd-logind[760]: New session 58 of user zuul.
Oct 5 05:33:38 localhost systemd[1]: Started Session 58 of User zuul.
Oct 5 05:33:39 localhost python3.9[238411]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 5 05:33:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47409 DF PROTO=TCP SPT=53532 DPT=9101 SEQ=873210496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764C2B70000000001030307)
Oct 5 05:33:41 localhost python3.9[238611]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:33:41 localhost systemd[1]: Reloading.
Oct 5 05:33:41 localhost systemd-sysv-generator[238641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:33:41 localhost systemd-rc-local-generator[238634]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:33:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:33:42 localhost python3.9[238754]: ansible-ansible.builtin.service_facts Invoked
Oct 5 05:33:42 localhost network[238771]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 5 05:33:42 localhost network[238772]: 'network-scripts' will be removed from distribution in near future.
Oct 5 05:33:42 localhost network[238773]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 5 05:33:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:33:46 localhost python3.9[239009]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38908 DF PROTO=TCP SPT=43178 DPT=9105 SEQ=1851653527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764D88A0000000001030307)
Oct 5 05:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38909 DF PROTO=TCP SPT=43178 DPT=9105 SEQ=1851653527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764DC760000000001030307)
Oct 5 05:33:48 localhost python3.9[239120]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:33:48 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Oct 5 05:33:48 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 5 05:33:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 05:33:48 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 5 05:33:49 localhost python3.9[239231]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38910 DF PROTO=TCP SPT=43178 DPT=9105 SEQ=1851653527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764E4760000000001030307)
Oct 5 05:33:50 localhost python3.9[239341]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:33:51 localhost python3.9[239451]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 5 05:33:52 localhost python3.9[239561]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:33:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 05:33:52 localhost systemd[1]: Reloading.
Oct 5 05:33:52 localhost podman[239563]: 2025-10-05 09:33:52.135591107 +0000 UTC m=+0.090465747 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 5 05:33:52 localhost systemd-rc-local-generator[239606]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:33:52 localhost systemd-sysv-generator[239611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:33:52 localhost podman[239563]: 2025-10-05 09:33:52.170152803 +0000 UTC m=+0.125027443 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 5 05:33:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:33:52 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:33:53 localhost python3.9[239725]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 5 05:33:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38911 DF PROTO=TCP SPT=43178 DPT=9105 SEQ=1851653527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764F4360000000001030307)
Oct 5 05:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:33:54 localhost podman[239837]: 2025-10-05 09:33:54.798832407 +0000 UTC m=+0.086379989 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:33:54 localhost podman[239837]: 2025-10-05 09:33:54.810472336 +0000 UTC m=+0.098019988 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, config_id=multipathd)
Oct 5 05:33:54 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:33:54 localhost python3.9[239836]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:33:55 localhost python3.9[239963]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:33:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27252 DF PROTO=TCP SPT=38302 DPT=9882 SEQ=829787586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC764FCCC0000000001030307)
Oct 5 05:33:56 localhost python3.9[240073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:33:57 localhost python3.9[240159]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656835.9763453-361-124548468096884/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=75ee96d307f9b0c66ca7024842804c8784dd5f65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:33:57 localhost python3.9[240269]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 5 05:33:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43054 DF PROTO=TCP SPT=56180 DPT=9100 SEQ=2588688091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76504F70000000001030307)
Oct 5 05:33:59 localhost python3.9[240379]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 5 05:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:34:00 localhost systemd[1]: tmp-crun.3r80Ak.mount: Deactivated successfully.
Oct 5 05:34:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:34:00 localhost podman[240491]: 2025-10-05 09:34:00.253422471 +0000 UTC m=+0.099982800 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 5 05:34:00 localhost systemd[1]: tmp-crun.YCMqX5.mount: Deactivated successfully.
Oct 5 05:34:00 localhost podman[240491]: 2025-10-05 09:34:00.332871706 +0000 UTC m=+0.179432065 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 05:34:00 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:34:00 localhost python3.9[240490]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Oct 5 05:34:00 localhost podman[240506]: 2025-10-05 09:34:00.337307233 +0000 UTC m=+0.078978113 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001) Oct 5 05:34:00 localhost podman[240506]: 2025-10-05 09:34:00.421075952 +0000 UTC m=+0.162746792 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:34:00 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:34:01 localhost python3.9[240649]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005471152.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Oct 5 05:34:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61831 DF PROTO=TCP SPT=53974 DPT=9102 SEQ=3722710526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76513F70000000001030307) Oct 5 05:34:03 localhost python3.9[240765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:03 localhost python3.9[240851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656842.880807-566-29482965532845/.source.conf _original_basename=ceilometer.conf follow=False checksum=307739b44452a4a1b48764f90c8d60cb24d1ca87 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47675 DF PROTO=TCP SPT=40186 DPT=9101 SEQ=3190776969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7651C130000000001030307) Oct 5 05:34:04 localhost python3.9[240959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:04 localhost python3.9[241045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759656844.0113273-566-173324102724413/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:05 localhost python3.9[241153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:06 localhost python3.9[241239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False 
src=/home/zuul/.ansible/tmp/ansible-tmp-1759656845.1122684-566-27849070527873/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:06 localhost python3.9[241347]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47677 DF PROTO=TCP SPT=40186 DPT=9101 SEQ=3190776969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76528370000000001030307) Oct 5 05:34:07 localhost python3.9[241455]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:07 localhost python3.9[241563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:08 localhost python3.9[241649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656847.5454628-742-87027272667746/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:09 localhost python3.9[241757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:09 localhost python3.9[241812]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:10 localhost python3.9[241920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:10 localhost python3.9[242006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656849.6933055-742-238898689826404/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47678 DF PROTO=TCP SPT=40186 
DPT=9101 SEQ=3190776969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76537F70000000001030307) Oct 5 05:34:11 localhost python3.9[242114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:11 localhost python3.9[242200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656850.8385427-742-169886958018658/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:12 localhost python3.9[242308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:13 localhost python3.9[242394]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656851.9807117-742-182883906303772/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:14 localhost python3.9[242502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:14 localhost python3.9[242588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656853.721843-742-134701385010372/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:15 localhost python3.9[242696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:16 localhost python3.9[242782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656854.8570516-742-83035173896584/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55821 DF PROTO=TCP SPT=53940 DPT=9105 SEQ=1846826743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7654DBA0000000001030307) Oct 5 05:34:16 localhost python3.9[242890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True 
get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:17 localhost python3.9[242976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656856.417626-742-2383952013010/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55822 DF PROTO=TCP SPT=53940 DPT=9105 SEQ=1846826743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76551B60000000001030307) Oct 5 05:34:17 localhost python3.9[243084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:18 localhost python3.9[243170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656857.5209203-742-126630426322853/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:19 localhost python3.9[243278]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:19 localhost python3.9[243364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656858.62038-742-10922473168184/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55823 DF PROTO=TCP SPT=53940 DPT=9105 SEQ=1846826743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76559B60000000001030307) Oct 5 05:34:20 localhost python3.9[243472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:34:20.369 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:34:20.369 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 
05:34:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:34:20.369 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:34:20 localhost python3.9[243558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759656859.7011986-742-252518230817993/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:21 localhost python3.9[243668]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:22 localhost python3.9[243778]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:34:22 localhost systemd[1]: Reloading. 
Oct 5 05:34:22 localhost podman[243780]: 2025-10-05 09:34:22.536689849 +0000 UTC m=+0.064623224 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:34:22 localhost podman[243780]: 2025-10-05 09:34:22.571226973 +0000 UTC m=+0.099160358 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:34:22 localhost systemd-rc-local-generator[243819]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:22 localhost systemd-sysv-generator[243823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:34:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:34:22 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:34:22 localhost systemd[1]: Listening on Podman API Socket. Oct 5 05:34:23 localhost python3.9[243945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55824 DF PROTO=TCP SPT=53940 DPT=9105 SEQ=1846826743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76569760000000001030307) Oct 5 05:34:24 localhost python3.9[244033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656863.3106346-1258-232337795987076/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:24 localhost python3.9[244088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:34:25 localhost podman[244177]: 2025-10-05 09:34:25.271633238 +0000 UTC m=+0.075752947 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:34:25 localhost podman[244177]: 2025-10-05 09:34:25.315270273 +0000 UTC m=+0.119389982 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001) Oct 5 05:34:25 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:34:25 localhost python3.9[244176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656863.3106346-1258-232337795987076/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8682 DF PROTO=TCP SPT=58856 DPT=9100 SEQ=1759163404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76571F60000000001030307) Oct 5 05:34:27 localhost python3.9[244304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Oct 5 05:34:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8683 DF PROTO=TCP SPT=58856 DPT=9100 SEQ=1759163404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76579F60000000001030307) Oct 5 05:34:28 localhost python3.9[244414]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:34:29 localhost python3[244524]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:34:30 localhost python3[244524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: 
[#012 {#012 "Id": "189fb56a112f774faa3c37fc532d9af434502871e8ddbdfe438285d2328ac9f5",#012 "Digest": "sha256:9aef12e39064170db87cb85373e2d10a5b618c8a9e6f50c6e9db72c91a337fc2",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9aef12e39064170db87cb85373e2d10a5b618c8a9e6f50c6e9db72c91a337fc2"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:23:37.889226851Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505025280,#012 "VirtualSize": 505025280,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": 
"/var/lib/containers/storage/overlay/750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:393f6536d9533e4890767f39ad657c20a3212b85c896ad1265872ed467d9b400",#012 "sha256:e3cd21c5b0533deb516897a0fc70f87f5bbfee3dc8cfa1ae1c00914563e8021d"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 
"empty_layer": true#012 },#012 Oct 5 05:34:30 localhost podman[244576]: 2025-10-05 09:34:30.347229095 +0000 UTC m=+0.077878614 container remove 528441c9a71186ebe512becb670135ede17d639fc3fe228a614230c1c7171948 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ceilometer-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:45:33, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7ae8f92d3eaef9724f650e9e8c537f24'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=032b792693069cded21b3a74ee4baa1db4887fb3, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Oct 5 05:34:30 localhost python3[244524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Oct 5 05:34:30 localhost podman[244589]: Oct 5 05:34:30 localhost podman[244589]: 2025-10-05 09:34:30.469436691 +0000 UTC m=+0.103628236 container create b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:34:30 localhost podman[244589]: 2025-10-05 09:34:30.420550307 +0000 UTC m=+0.054741892 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Oct 5 05:34:30 localhost python3[244524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Oct 5 05:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:34:30 localhost podman[244647]: 2025-10-05 09:34:30.929439574 +0000 UTC m=+0.085557696 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:34:30 localhost podman[244646]: 2025-10-05 09:34:30.979898681 +0000 UTC m=+0.140255546 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:34:31 localhost podman[244647]: 2025-10-05 09:34:31.001279518 +0000 UTC m=+0.157397640 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Oct 5 05:34:31 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:34:31 localhost podman[244646]: 2025-10-05 09:34:31.01721972 +0000 UTC m=+0.177576585 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:34:31 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:34:31 localhost python3.9[244779]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46976 DF PROTO=TCP SPT=56890 DPT=9102 SEQ=749390954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76589360000000001030307) Oct 5 05:34:32 localhost python3.9[244891]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.378 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.379 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.379 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 
2025-10-05 09:34:32.380 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.395 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.395 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.396 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.396 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.397 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.397 2 DEBUG oslo_service.periodic_task [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.398 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.399 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.399 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.417 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.417 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.418 2 DEBUG oslo_concurrency.lockutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.418 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.419 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:34:32 localhost nova_compute[238014]: 2025-10-05 09:34:32.849 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:34:32 localhost python3.9[245020]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656872.2694204-1450-168007574313916/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.018 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - 
- - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.019 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13559MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.019 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.019 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.083 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.084 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 
2025-10-05 09:34:33.103 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.566 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.570 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.593 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.594 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:34:33 localhost nova_compute[238014]: 2025-10-05 09:34:33.594 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:34:33 localhost python3.9[245097]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:34:33 localhost systemd[1]: Reloading. Oct 5 05:34:33 localhost systemd-sysv-generator[245126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:33 localhost systemd-rc-local-generator[245123]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27686 DF PROTO=TCP SPT=50376 DPT=9101 SEQ=3614786314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76591420000000001030307) Oct 5 05:34:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:34:34 localhost python3.9[245190]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:34:34 localhost systemd[1]: Reloading. 
Oct 5 05:34:34 localhost systemd-rc-local-generator[245216]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:34 localhost systemd-sysv-generator[245222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:34:35 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 5 05:34:35 localhost systemd[1]: tmp-crun.5LDcvS.mount: Deactivated successfully. Oct 5 05:34:35 localhost systemd[1]: Started libcrun container. Oct 5 05:34:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e59e31ad23f94ede12404e3c0febe312feb335b9531a8f6905b3057a28741ff/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e59e31ad23f94ede12404e3c0febe312feb335b9531a8f6905b3057a28741ff/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 05:34:35 localhost podman[245230]: 2025-10-05 09:34:35.344579647 +0000 UTC m=+0.156190708 container init b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + sudo -E kolla_set_configs Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: sudo: unable to send audit 
message: Operation not permitted Oct 5 05:34:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:34:35 localhost podman[245230]: 2025-10-05 09:34:35.378749672 +0000 UTC m=+0.190360733 container start b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS) Oct 5 05:34:35 localhost podman[245230]: ceilometer_agent_compute Oct 5 05:34:35 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Validating config file Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Copying service configuration files Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:35 localhost 
ceilometer_agent_compute[245244]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: INFO:__main__:Writing out command to execute Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: ++ cat /run_command Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + ARGS= Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + sudo kolla_copy_cacerts Oct 5 05:34:35 localhost podman[245252]: 2025-10-05 09:34:35.458847483 +0000 UTC m=+0.073432516 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: sudo: unable to send audit message: Operation not permitted Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + [[ ! -n '' ]] Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + . kolla_extend_start Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + umask 0022 Oct 5 05:34:35 localhost ceilometer_agent_compute[245244]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Oct 5 05:34:35 localhost podman[245252]: 2025-10-05 09:34:35.493178433 +0000 UTC m=+0.107763466 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 
'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:34:35 localhost podman[245252]: unhealthy Oct 5 05:34:35 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:34:35 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. 
Oct 5 05:34:36 localhost python3.9[245385]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.199 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.199 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.199 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.199 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.200 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG 
cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.201 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 
09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.202 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG 
cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.204 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.205 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.206 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.207 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.210 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.211 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.212 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.213 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.214 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.215 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.216 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.217 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.218 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.219 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.220 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.220 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.237 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.239 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.240 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.333 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.393 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.393 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.393 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.393 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.393 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.394 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.395 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.396 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG
cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.397 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.398 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.399 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 
localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.400 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.401 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.402 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.403 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.404 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 
localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.405 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 
05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.406 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.407 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.408 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.409 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.410 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.411 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.414 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.421 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.424 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.424 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.425 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost 
ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.425 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.425 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.425 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.425 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.425 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 
09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.426 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.427 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.427 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.427 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no 
resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.427 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.427 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.427 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.428 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.428 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.428 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:36 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:36.428 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27688 DF PROTO=TCP SPT=50376 DPT=9101 SEQ=3614786314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7659D370000000001030307) Oct 5 05:34:37 localhost systemd[1]: Stopping ceilometer_agent_compute container... Oct 5 05:34:37 localhost systemd[1]: tmp-crun.AiTRwO.mount: Deactivated successfully. Oct 5 05:34:37 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:37.302 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process Oct 5 05:34:37 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:37.403 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Oct 5 05:34:37 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:37.403 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Oct 5 05:34:37 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:37.403 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Oct 5 05:34:37 localhost journal[237275]: End of file while reading data: Input/output error Oct 5 05:34:37 localhost journal[237275]: End of file while reading data: Input/output error Oct 5 05:34:37 localhost ceilometer_agent_compute[245244]: 2025-10-05 09:34:37.411 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Oct 5 05:34:37 localhost systemd[1]: libpod-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.scope: Deactivated successfully. 
Oct 5 05:34:37 localhost systemd[1]: libpod-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.scope: Consumed 1.217s CPU time. Oct 5 05:34:37 localhost podman[245395]: 2025-10-05 09:34:37.558502746 +0000 UTC m=+0.325859292 container died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 05:34:37 localhost 
systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.timer: Deactivated successfully. Oct 5 05:34:37 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:34:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c-userdata-shm.mount: Deactivated successfully. Oct 5 05:34:37 localhost podman[245395]: 2025-10-05 09:34:37.609931438 +0000 UTC m=+0.377287994 container cleanup b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, container_name=ceilometer_agent_compute) Oct 5 05:34:37 localhost podman[245395]: ceilometer_agent_compute Oct 5 05:34:37 localhost podman[245424]: 2025-10-05 09:34:37.706043763 +0000 UTC m=+0.061159630 container cleanup b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 05:34:37 localhost podman[245424]: ceilometer_agent_compute Oct 5 05:34:37 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Oct 5 05:34:37 localhost systemd[1]: Stopped ceilometer_agent_compute container. Oct 5 05:34:37 localhost systemd[1]: Starting ceilometer_agent_compute container... Oct 5 05:34:37 localhost systemd[1]: Started libcrun container. Oct 5 05:34:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e59e31ad23f94ede12404e3c0febe312feb335b9531a8f6905b3057a28741ff/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e59e31ad23f94ede12404e3c0febe312feb335b9531a8f6905b3057a28741ff/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Oct 5 05:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 05:34:37 localhost podman[245437]: 2025-10-05 09:34:37.885285831 +0000 UTC m=+0.143138562 container init b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + sudo -E kolla_set_configs Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: sudo: unable to send audit 
message: Operation not permitted Oct 5 05:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:34:37 localhost podman[245437]: 2025-10-05 09:34:37.920259138 +0000 UTC m=+0.178111869 container start b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2) Oct 5 05:34:37 localhost podman[245437]: ceilometer_agent_compute Oct 5 05:34:37 localhost systemd[1]: Started ceilometer_agent_compute container. Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Validating config file Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Copying service configuration files Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Deleting 
/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: INFO:__main__:Writing out command to execute
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: ++ cat /run_command
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + ARGS=
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + sudo kolla_copy_cacerts
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: sudo: unable to send audit message: Operation not permitted
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + [[ ! -n '' ]]
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + . kolla_extend_start
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + umask 0022
Oct 5 05:34:37 localhost ceilometer_agent_compute[245451]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Oct 5 05:34:38 localhost podman[245460]: 2025-10-05 09:34:38.015015848 +0000 UTC m=+0.092196563 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:34:38 localhost podman[245460]: 2025-10-05 09:34:38.048100084 +0000 UTC m=+0.125280769 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 05:34:38 localhost podman[245460]: unhealthy
Oct 5 05:34:38 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:34:38 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'.
Oct 5 05:34:38 localhost python3.9[245590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.676 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2
DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.677 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 
'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.678 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 
DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG 
cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.679 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 
2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.680 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 
localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.681 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.682 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.683 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.684 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.685 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.686 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.687 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.688 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.689 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.690 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.708 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. 
Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.710 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.711 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.728 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.849 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.850 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 
'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.851 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.852 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 
12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.853 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.854 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG 
cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.855 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.856 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.857 12 
DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG 
cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.858 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.859 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG 
cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.860 12 DEBUG 
cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.861 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 
2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.862 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG 
cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.863 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG 
cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.864 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.865 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.866 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.867 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.870 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.876 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 
2025-10-05 09:34:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.883 12 DEBUG ceilometer.polling.manager [-] 
Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:34:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:34:39 localhost python3.9[245684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656878.1680815-1547-39436676625888/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:40 localhost python3.9[245794]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Oct 5 05:34:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27689 DF PROTO=TCP SPT=50376 DPT=9101 SEQ=3614786314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765ACF60000000001030307) Oct 5 05:34:41 localhost python3.9[245904]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:34:42 localhost python3[246081]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:34:42 localhost podman[246120]: Oct 5 05:34:42 localhost podman[246120]: 2025-10-05 09:34:42.495656467 +0000 UTC m=+0.082603197 
container create ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors ) Oct 5 05:34:42 localhost podman[246120]: 2025-10-05 09:34:42.456597098 +0000 UTC m=+0.043543828 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Oct 5 05:34:42 localhost python3[246081]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Oct 5 05:34:43 localhost python3.9[246284]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False 
get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:44 localhost python3.9[246396]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:44 localhost python3.9[246505]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656884.2838445-1705-6919305975934/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:45 localhost python3.9[246560]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:34:45 localhost systemd[1]: Reloading. Oct 5 05:34:45 localhost systemd-rc-local-generator[246587]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:45 localhost systemd-sysv-generator[246590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:34:46 localhost python3.9[246651]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:34:46 localhost systemd[1]: Reloading. Oct 5 05:34:46 localhost systemd-rc-local-generator[246676]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:46 localhost systemd-sysv-generator[246679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:34:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10832 DF PROTO=TCP SPT=49870 DPT=9105 SEQ=1434225036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765C2EA0000000001030307) Oct 5 05:34:46 localhost systemd[1]: Starting node_exporter container... Oct 5 05:34:46 localhost systemd[1]: tmp-crun.LDkvhP.mount: Deactivated successfully. Oct 5 05:34:46 localhost systemd[1]: Started libcrun container. Oct 5 05:34:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:34:47 localhost podman[246691]: 2025-10-05 09:34:47.02813921 +0000 UTC m=+0.138193611 container init ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.043Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.043Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Oct 5 05:34:47 localhost node_exporter[246705]: 
ts=2025-10-05T09:34:47.043Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.043Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.043Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.044Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.044Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.044Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.044Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Oct 5 05:34:47 localhost 
node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=arp Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=bcache Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=bonding Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=btrfs Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=conntrack Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=cpu Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=cpufreq Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=diskstats Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=edac Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=fibrechannel Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=filefd Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=filesystem Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=infiniband Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=ipvs Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=loadavg Oct 5 05:34:47 localhost 
node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=mdadm Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=meminfo Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=netclass Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=netdev Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=netstat Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=nfs Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=nfsd Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=nvme Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=schedstat Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=sockstat Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=softnet Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=systemd Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=tapestats Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=udp_queues Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=vmstat Oct 5 05:34:47 localhost 
node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=xfs Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.045Z caller=node_exporter.go:117 level=info collector=zfs Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.046Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Oct 5 05:34:47 localhost node_exporter[246705]: ts=2025-10-05T09:34:47.046Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Oct 5 05:34:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:34:47 localhost podman[246691]: 2025-10-05 09:34:47.054276068 +0000 UTC m=+0.164330479 container start ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 
'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:34:47 localhost podman[246691]: node_exporter Oct 5 05:34:47 localhost systemd[1]: Started node_exporter container. Oct 5 05:34:47 localhost podman[246714]: 2025-10-05 09:34:47.139998956 +0000 UTC m=+0.080823050 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:34:47 localhost podman[246714]: 2025-10-05 09:34:47.157298932 +0000 UTC m=+0.098123016 container exec_died 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:34:47 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:34:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10833 DF PROTO=TCP SPT=49870 DPT=9105 SEQ=1434225036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765C6F60000000001030307) Oct 5 05:34:48 localhost python3.9[246845]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:34:48 localhost systemd[1]: Stopping node_exporter container... Oct 5 05:34:48 localhost systemd[1]: libpod-ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.scope: Deactivated successfully. Oct 5 05:34:48 localhost podman[246849]: 2025-10-05 09:34:48.867680023 +0000 UTC m=+0.079534027 container died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:34:48 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.timer: Deactivated successfully. Oct 5 05:34:48 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:34:48 localhost podman[246849]: 2025-10-05 09:34:48.913854788 +0000 UTC m=+0.125708782 container cleanup ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:34:48 localhost podman[246849]: node_exporter Oct 5 05:34:48 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Oct 5 05:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-123e9f86dc50b2398481f645d33956c08bb8fbe78e407a710993ec35ad57739f-merged.mount: Deactivated successfully. Oct 5 05:34:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e-userdata-shm.mount: Deactivated successfully. Oct 5 05:34:48 localhost podman[246876]: 2025-10-05 09:34:48.9989401 +0000 UTC m=+0.047584564 container cleanup ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:34:48 localhost podman[246876]: node_exporter Oct 5 05:34:49 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'. Oct 5 05:34:49 localhost systemd[1]: Stopped node_exporter container. Oct 5 05:34:49 localhost systemd[1]: Starting node_exporter container... Oct 5 05:34:49 localhost systemd[1]: Started libcrun container. Oct 5 05:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:34:49 localhost podman[246887]: 2025-10-05 09:34:49.186975683 +0000 UTC m=+0.148604455 container init ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.203Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" 
flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.204Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=arp Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=bcache Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=bonding Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=btrfs Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=conntrack Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=cpu Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=cpufreq Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=diskstats Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=edac Oct 5 
05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=fibrechannel Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=filefd Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=filesystem Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=infiniband Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=ipvs Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=loadavg Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=mdadm Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=meminfo Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=netclass Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=netdev Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=netstat Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=nfs Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=nfsd Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=nvme Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=schedstat Oct 5 
05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=sockstat Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=softnet Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=systemd Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=tapestats Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=udp_queues Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=vmstat Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=xfs Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=node_exporter.go:117 level=info collector=zfs Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Oct 5 05:34:49 localhost node_exporter[246900]: ts=2025-10-05T09:34:49.205Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Oct 5 05:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:34:49 localhost podman[246887]: 2025-10-05 09:34:49.219655473 +0000 UTC m=+0.181284235 container start ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:34:49 localhost podman[246887]: node_exporter Oct 5 05:34:49 localhost systemd[1]: Started node_exporter container. 
Oct 5 05:34:49 localhost podman[246909]: 2025-10-05 09:34:49.310293031 +0000 UTC m=+0.081893508 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:34:49 localhost podman[246909]: 2025-10-05 09:34:49.315691203 +0000 UTC m=+0.087291610 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:34:49 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:34:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10834 DF PROTO=TCP SPT=49870 DPT=9105 SEQ=1434225036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765CEF60000000001030307) Oct 5 05:34:50 localhost python3.9[247040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:34:50 localhost python3.9[247128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656889.6001737-1802-205947442334929/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Oct 5 05:34:52 localhost python3.9[247238]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False Oct 5 05:34:52 localhost python3.9[247348]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:34:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:34:53 localhost podman[247459]: 2025-10-05 09:34:53.534747551 +0000 UTC m=+0.090056613 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:34:53 localhost podman[247459]: 2025-10-05 09:34:53.546535221 +0000 UTC m=+0.101844313 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:34:53 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:34:53 localhost python3[247458]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:34:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10835 DF PROTO=TCP SPT=49870 DPT=9105 SEQ=1434225036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765DEB60000000001030307) Oct 5 05:34:55 localhost podman[247491]: 2025-10-05 09:34:53.842763644 +0000 UTC m=+0.048328245 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Oct 5 05:34:55 localhost podman[247561]: Oct 5 05:34:55 localhost podman[247561]: 2025-10-05 09:34:55.742511152 +0000 UTC m=+0.075681894 container create ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi ) Oct 5 05:34:55 localhost podman[247561]: 2025-10-05 09:34:55.705564459 +0000 UTC 
m=+0.038735241 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Oct 5 05:34:55 localhost python3[247458]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Oct 5 05:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:34:55 localhost podman[247586]: 2025-10-05 09:34:55.970981979 +0000 UTC m=+0.133608299 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd) Oct 5 05:34:56 localhost podman[247586]: 2025-10-05 09:34:56.010162401 +0000 UTC m=+0.172788691 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:34:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38161 DF PROTO=TCP SPT=36970 DPT=9882 SEQ=1547597461 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC765E72B0000000001030307) Oct 5 05:34:56 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:34:56 localhost python3.9[247724]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:34:57 localhost python3.9[247836]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19230 DF PROTO=TCP SPT=46716 DPT=9100 SEQ=3635600529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765EF370000000001030307) Oct 5 05:34:58 localhost python3.9[247945]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656897.6199453-1960-98887338183394/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:34:58 localhost python3.9[248000]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:34:58 localhost systemd[1]: Reloading. 
Oct 5 05:34:58 localhost systemd-rc-local-generator[248023]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:34:58 localhost systemd-sysv-generator[248026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:34:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:34:59 localhost python3.9[248091]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:34:59 localhost systemd[1]: Reloading. Oct 5 05:35:00 localhost systemd-rc-local-generator[248116]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:35:00 localhost systemd-sysv-generator[248121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:35:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:35:00 localhost systemd[1]: Starting podman_exporter container... Oct 5 05:35:00 localhost systemd[1]: Started libcrun container. Oct 5 05:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:35:00 localhost podman[248132]: 2025-10-05 09:35:00.444691494 +0000 UTC m=+0.147080954 container init ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:35:00 localhost podman_exporter[248146]: ts=2025-10-05T09:35:00.463Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Oct 5 05:35:00 localhost podman_exporter[248146]: ts=2025-10-05T09:35:00.463Z caller=exporter.go:69 level=info msg=metrics enhanced=false Oct 5 05:35:00 localhost podman_exporter[248146]: ts=2025-10-05T09:35:00.463Z caller=handler.go:94 level=info msg="enabled collectors" Oct 5 05:35:00 localhost podman_exporter[248146]: ts=2025-10-05T09:35:00.463Z caller=handler.go:105 level=info collector=container Oct 5 05:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:35:00 localhost podman[248132]: 2025-10-05 09:35:00.481241427 +0000 UTC m=+0.183630867 container start ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:35:00 localhost podman[248132]: podman_exporter
Oct 5 05:35:00 localhost systemd[1]: Starting Podman API Service...
Oct 5 05:35:00 localhost systemd[1]: Started Podman API Service.
Oct 5 05:35:00 localhost systemd[1]: Started podman_exporter container.
Oct 5 05:35:00 localhost podman[248157]: time="2025-10-05T09:35:00Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 5 05:35:00 localhost podman[248156]: 2025-10-05 09:35:00.572197033 +0000 UTC m=+0.085473403 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:35:00 localhost podman[248157]: time="2025-10-05T09:35:00Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 5 05:35:00 localhost podman[248157]: time="2025-10-05T09:35:00Z" level=info msg="Setting parallel job count to 25"
Oct 5 05:35:00 localhost podman[248157]: time="2025-10-05T09:35:00Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 5 05:35:00 localhost podman[248157]: time="2025-10-05T09:35:00Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Oct 5 05:35:00 localhost podman[248156]: 2025-10-05 09:35:00.610150112 +0000 UTC m=+0.123426472 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:35:00 localhost podman[248157]: @ - - [05/Oct/2025:09:35:00 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 5 05:35:00 localhost podman[248157]: time="2025-10-05T09:35:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:35:00 localhost podman[248156]: unhealthy
Oct 5 05:35:00 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:35:00 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'.
Oct 5 05:35:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 05:35:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 05:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:35:01 localhost podman[248305]: 2025-10-05 09:35:01.169136066 +0000 UTC m=+0.082784952 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 5 05:35:01 localhost podman[248303]: 2025-10-05 09:35:01.14157089 +0000 UTC m=+0.061717417 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 5 05:35:01 localhost podman[248303]: 2025-10-05 09:35:01.227258377 +0000 UTC m=+0.147404854 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:35:01 localhost podman[248305]: 2025-10-05 09:35:01.278299072 +0000 UTC m=+0.191947998 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 5 05:35:01 localhost python3.9[248304]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:35:01 localhost systemd[1]: Stopping podman_exporter container...
Oct 5 05:35:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5582 DF PROTO=TCP SPT=50406 DPT=9102 SEQ=215817546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC765FE760000000001030307)
Oct 5 05:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:03 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:35:03 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:35:03 localhost podman[248157]: @ - - [05/Oct/2025:09:35:00 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 7886 "" "Go-http-client/1.1"
Oct 5 05:35:03 localhost systemd[1]: libpod-ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.scope: Deactivated successfully.
Oct 5 05:35:03 localhost podman[248347]: 2025-10-05 09:35:03.541121393 +0000 UTC m=+2.096145933 container died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 05:35:03 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.timer: Deactivated successfully.
Oct 5 05:35:03 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:35:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114-userdata-shm.mount: Deactivated successfully.
Oct 5 05:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29114 DF PROTO=TCP SPT=51442 DPT=9101 SEQ=721239791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76606730000000001030307)
Oct 5 05:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 05:35:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5665 writes, 24K keys, 5665 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5665 writes, 725 syncs, 7.81 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4 writes, 8 keys, 4 commit groups, 1.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 4 writes, 2 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 05:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-1c5f505ae94ec7e72f42b4def0ea22b022d87adc63c32b9724037c36133ba095-merged.mount: Deactivated successfully.
Oct 5 05:35:05 localhost podman[248347]: 2025-10-05 09:35:05.430686803 +0000 UTC m=+3.985711283 container cleanup ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:35:05 localhost podman[248347]: podman_exporter
Oct 5 05:35:05 localhost podman[248362]: 2025-10-05 09:35:05.449440277 +0000 UTC m=+1.895810326 container cleanup ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 05:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:06 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 5 05:35:06 localhost podman[248374]: 2025-10-05 09:35:06.595126084 +0000 UTC m=+0.050949243 container cleanup ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 5 05:35:06 localhost podman[248374]: podman_exporter
Oct 5 05:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29116 DF PROTO=TCP SPT=51442 DPT=9101 SEQ=721239791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76612760000000001030307)
Oct 5 05:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:07 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 5 05:35:07 localhost systemd[1]: Stopped podman_exporter container.
Oct 5 05:35:07 localhost systemd[1]: Starting podman_exporter container...
Oct 5 05:35:07 localhost systemd[1]: Started libcrun container.
Oct 5 05:35:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:35:07 localhost podman[248387]: 2025-10-05 09:35:07.614335679 +0000 UTC m=+0.411983982 container init ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:35:07 localhost podman[248157]: @ - - [05/Oct/2025:09:35:07 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 5 05:35:07 localhost podman[248157]: time="2025-10-05T09:35:07Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:35:07 localhost podman_exporter[248402]: ts=2025-10-05T09:35:07.627Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 5 05:35:07 localhost podman_exporter[248402]: ts=2025-10-05T09:35:07.627Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 5 05:35:07 localhost podman_exporter[248402]: ts=2025-10-05T09:35:07.627Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 5 05:35:07 localhost podman_exporter[248402]: ts=2025-10-05T09:35:07.627Z caller=handler.go:105 level=info collector=container
Oct 5 05:35:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:35:07 localhost podman[248387]: 2025-10-05 09:35:07.650727588 +0000 UTC m=+0.448375881 container start ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 05:35:07 localhost podman[248387]: podman_exporter
Oct 5 05:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a-merged.mount: Deactivated successfully.
Oct 5 05:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a-merged.mount: Deactivated successfully.
Oct 5 05:35:10 localhost systemd[1]: Started podman_exporter container.
Oct 5 05:35:10 localhost podman[248412]: 2025-10-05 09:35:10.170878997 +0000 UTC m=+2.513155976 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:35:10 localhost podman[248412]: 2025-10-05 09:35:10.179038152 +0000 UTC m=+2.521315111 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:35:10 localhost podman[248412]: unhealthy
Oct 5 05:35:10 localhost podman[248424]: 2025-10-05 09:35:10.225970738 +0000 UTC m=+2.144374202 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 5 05:35:10 localhost podman[248424]: 2025-10-05 09:35:10.257083548 +0000 UTC m=+2.175486992 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 5 05:35:10 localhost podman[248424]: unhealthy
Oct 5 05:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:35:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29117 DF PROTO=TCP SPT=51442 DPT=9101 SEQ=721239791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76622360000000001030307)
Oct 5 05:35:11 localhost python3.9[248561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 5 05:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:12 localhost python3.9[248649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759656911.4391701-2057-159719623666316/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 5 05:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:12 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:35:12 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'.
Oct 5 05:35:12 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:35:12 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'.
Oct 5 05:35:13 localhost python3.9[248760]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 5 05:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:14 localhost python3.9[248870]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 5 05:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-dfeb5e97bc5c93c6dd9c6b5d4562ebcdbcb5141c059d0f33a6487f50c5da8817-merged.mount: Deactivated successfully.
Oct 5 05:35:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:15 localhost python3[248980]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 5 05:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-dfeb5e97bc5c93c6dd9c6b5d4562ebcdbcb5141c059d0f33a6487f50c5da8817-merged.mount: Deactivated successfully.
Oct 5 05:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58342 DF PROTO=TCP SPT=59610 DPT=9105 SEQ=3065518117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766381A0000000001030307)
Oct 5 05:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58343 DF PROTO=TCP SPT=59610 DPT=9105 SEQ=3065518117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7663C360000000001030307)
Oct 5 05:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:35:19 localhost podman[249007]: 2025-10-05 09:35:19.689372509 +0000 UTC m=+0.199909936 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 05:35:19 localhost podman[249007]: 2025-10-05 09:35:19.694005882 +0000 UTC m=+0.204543289 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58344 DF PROTO=TCP SPT=59610 DPT=9105 SEQ=3065518117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76644360000000001030307)
Oct 5 05:35:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:35:20.370 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:35:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:35:20.371 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:35:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:35:20.371 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-9f5909b51d6f5176a9af02a80a42aa4e763d46fcd7e41075f5953671b5582f8a-merged.mount: Deactivated successfully.
Oct 5 05:35:21 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 05:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 05:35:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58345 DF PROTO=TCP SPT=59610 DPT=9105 SEQ=3065518117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76653F70000000001030307)
Oct 5 05:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:35:23 localhost podman[249043]: 2025-10-05 09:35:23.95298131 +0000 UTC m=+0.207337562 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 05:35:23 localhost podman[249043]: 2025-10-05 09:35:23.985310222 +0000 UTC m=+0.239666494 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 5 05:35:23 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:24 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:35:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-dfeb5e97bc5c93c6dd9c6b5d4562ebcdbcb5141c059d0f33a6487f50c5da8817-merged.mount: Deactivated successfully.
Oct 5 05:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-dfeb5e97bc5c93c6dd9c6b5d4562ebcdbcb5141c059d0f33a6487f50c5da8817-merged.mount: Deactivated successfully.
Oct 5 05:35:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49960 DF PROTO=TCP SPT=60548 DPT=9882 SEQ=3777821468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7665C610000000001030307)
Oct 5 05:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-56fd012cd53db160963f1dee06cf4da8c3422d34817ca588642ef798e92735f5-merged.mount: Deactivated successfully.
Oct 5 05:35:26 localhost systemd[1]: tmp-crun.bwYigN.mount: Deactivated successfully.
Oct 5 05:35:26 localhost podman[249074]: 2025-10-05 09:35:26.812662863 +0000 UTC m=+0.079027332 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Oct 5 05:35:26 localhost podman[249074]: 2025-10-05 09:35:26.84821717 +0000 UTC m=+0.114581629 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 05:35:27 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:35:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58646 DF PROTO=TCP SPT=33920 DPT=9100 SEQ=3084647837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76664770000000001030307)
Oct 5 05:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:29 localhost podman[248157]: time="2025-10-05T09:35:29Z" level=error msg="Getting root fs size for \"083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd\": getting diffsize of layer \"919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94\" and its parent \"948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca\": unmounting layer 948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca: replacing mount point \"/var/lib/containers/storage/overlay/948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca/merged\": device or resource busy"
Oct 5 05:35:29 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:29 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:35:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55083 DF PROTO=TCP SPT=38062 DPT=9102 SEQ=9133447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76673B60000000001030307)
Oct 5 05:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6-merged.mount: Deactivated successfully.
Oct 5 05:35:32 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 5 05:35:33 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:33 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 5 05:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.587 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.655 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.656 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.656 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.667 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.667 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.667 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.667 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.667 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.667 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.681 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.681 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.681 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.682 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:35:33 localhost nova_compute[238014]: 2025-10-05 09:35:33.682 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:35:33 localhost podman[249105]: 2025-10-05 09:35:33.693038078 +0000 UTC m=+0.134714799 container health_status 
70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:35:33 localhost podman[249104]: 2025-10-05 09:35:33.666795158 +0000 UTC m=+0.112198557 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent) Oct 5 05:35:33 localhost podman[249104]: 2025-10-05 09:35:33.746021244 +0000 UTC m=+0.191424653 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:35:33 localhost podman[249105]: 2025-10-05 09:35:33.800002086 +0000 UTC m=+0.241678807 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44432 DF PROTO=TCP SPT=34166 DPT=9101 SEQ=3678851531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7667BA30000000001030307) Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.076 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.274 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.276 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13189MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.276 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.277 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.599 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.600 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:35:34 localhost nova_compute[238014]: 2025-10-05 09:35:34.620 2 DEBUG oslo_concurrency.processutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.108 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.115 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.132 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.135 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.136 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-56fd012cd53db160963f1dee06cf4da8c3422d34817ca588642ef798e92735f5-merged.mount: Deactivated successfully. Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.846 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.847 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.847 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:35:35 localhost nova_compute[238014]: 2025-10-05 09:35:35.847 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:35:35 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:35:35 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:35:35 localhost podman[248994]: 2025-10-05 09:35:16.841388184 +0000 UTC m=+0.044464581 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Oct 5 05:35:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44434 DF PROTO=TCP SPT=34166 DPT=9101 SEQ=3678851531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76687B60000000001030307) Oct 5 05:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:38 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:38 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:35:40 localhost podman[248157]: time="2025-10-05T09:35:40Z" level=error msg="Getting root fs size for \"083450a98b4ec1f8438d2170a8a1035526b3080f9f5ad0f487aa11a6acd35fbd\": getting diffsize of layer \"948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer 948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca: replacing mount point \"/var/lib/containers/storage/overlay/948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca/merged\": device or resource busy" Oct 5 05:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:35:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44435 DF PROTO=TCP SPT=34166 DPT=9101 SEQ=3678851531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76697760000000001030307) Oct 5 05:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 5 05:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:35:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:35:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-cc7901b34e87d1545c3d13848f76cd466a17f5de88c76f001f972fb796a95aa6-merged.mount: Deactivated successfully. Oct 5 05:35:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:35:43 localhost podman[249242]: 2025-10-05 09:35:43.300081073 +0000 UTC m=+0.519741521 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:35:43 localhost podman[249243]: 2025-10-05 09:35:43.334171651 +0000 UTC m=+0.547347938 container health_status 
ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:35:43 localhost podman[249242]: 2025-10-05 09:35:43.336378949 +0000 UTC m=+0.556039397 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:35:43 localhost podman[249243]: 2025-10-05 09:35:43.368497815 +0000 UTC m=+0.581674122 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:35:43 localhost podman[249243]: unhealthy Oct 5 05:35:43 localhost podman[249242]: 
unhealthy Oct 5 05:35:43 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:43 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:44 localhost podman[248157]: time="2025-10-05T09:35:44Z" level=error msg="Getting root fs size for \"0f2d106d0a37abacf0995812a0f15e484aec40b15058aa901296ec33a43a318f\": getting diffsize of layer \"f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy" Oct 5 05:35:44 localhost podman[249213]: 2025-10-05 09:35:40.902341377 +0000 UTC m=+0.049078743 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Oct 5 05:35:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:44 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:35:44 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. 
Oct 5 05:35:44 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:35:44 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. Oct 5 05:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-fd611533d0754c2b8855ffa9aefaf86645bfbe47724a0d11fe20ac2f596fdd7a-merged.mount: Deactivated successfully. Oct 5 05:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 5 05:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684-merged.mount: Deactivated successfully. 
Oct 5 05:35:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65306 DF PROTO=TCP SPT=33012 DPT=9105 SEQ=144308587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766AD4B0000000001030307) Oct 5 05:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684-merged.mount: Deactivated successfully. Oct 5 05:35:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65307 DF PROTO=TCP SPT=33012 DPT=9105 SEQ=144308587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766B1360000000001030307) Oct 5 05:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:35:49 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:35:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65308 DF PROTO=TCP SPT=33012 DPT=9105 SEQ=144308587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766B9360000000001030307) Oct 5 05:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:35:51 localhost podman[249354]: 2025-10-05 09:35:51.941714248 +0000 UTC m=+0.099423506 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:35:51 localhost podman[249354]: 2025-10-05 09:35:51.950520447 +0000 UTC m=+0.108229635 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:35:52 localhost systemd[1]: 
var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:35:52 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:35:52 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:35:52 localhost podman[249213]: Oct 5 05:35:52 localhost podman[248157]: time="2025-10-05T09:35:52Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged: invalid argument" Oct 5 05:35:52 localhost podman[248157]: time="2025-10-05T09:35:52Z" level=error msg="Getting root fs size for \"0fe657b61dbf4764ec74485ea5fde086368c910f546386964552d5c523d24dfa\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": creating overlay mount to /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/IDN6N356LYE3OBITDBQNLPW6JL,upperdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/diff,workdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/work,nodev,metacopy=on\": no such file or directory" Oct 5 05:35:52 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:35:52 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:35:52 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:35:52 localhost podman[249213]: 2025-10-05 09:35:52.441370228 +0000 UTC m=+11.588107574 container create 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 05:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 5 05:35:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65309 DF PROTO=TCP SPT=33012 DPT=9105 SEQ=144308587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766C8F60000000001030307) Oct 5 05:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:35:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-d87447dd1fa9f694b89812f0cae7146141669ee7c42cff34f97ae344268ea684-merged.mount: Deactivated successfully. Oct 5 05:35:54 localhost python3[248980]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Oct 5 05:35:54 localhost podman[249377]: 2025-10-05 09:35:54.786969939 +0000 UTC m=+0.226064432 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:35:54 localhost podman[249377]: 2025-10-05 09:35:54.820120628 +0000 UTC m=+0.259215131 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2) Oct 5 05:35:55 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:35:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31718 DF PROTO=TCP SPT=40176 DPT=9882 SEQ=2446202167 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766D18C0000000001030307) Oct 5 05:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-77e4045be5c881139fd829799dfaed464fba2b2ef703554c7a184a66e7396587-merged.mount: Deactivated successfully. 
Oct 5 05:35:57 localhost podman[249405]: 2025-10-05 09:35:57.627920842 +0000 UTC m=+0.216660446 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:35:57 localhost podman[249405]: 2025-10-05 09:35:57.707379967 +0000 UTC m=+0.296119591 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:35:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60203 DF PROTO=TCP SPT=52516 DPT=9100 SEQ=3149081042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC766D9B60000000001030307) Oct 5 05:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-5dec2b237273ccb78113c2b1c492ef164c4f5b231452e08517989bb84e3d4334-merged.mount: Deactivated successfully. Oct 5 05:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e-merged.mount: Deactivated successfully. Oct 5 05:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:00 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:00 localhost python3.9[249545]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:36:01 localhost python3.9[249657]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 5 05:36:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60618 DF PROTO=TCP SPT=51810 DPT=9102 SEQ=3107482673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766E8B70000000001030307) Oct 5 05:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478-merged.mount: Deactivated successfully. Oct 5 05:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-5dec2b237273ccb78113c2b1c492ef164c4f5b231452e08517989bb84e3d4334-merged.mount: Deactivated successfully. Oct 5 05:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:36:03 localhost python3.9[249766]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759656962.8063982-2215-182277854680916/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478-merged.mount: Deactivated successfully. Oct 5 05:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44109 DF PROTO=TCP SPT=42876 DPT=9101 SEQ=3367136840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766F0D20000000001030307) Oct 5 05:36:04 localhost python3.9[249821]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:36:04 localhost systemd[1]: Reloading. Oct 5 05:36:04 localhost systemd-rc-local-generator[249848]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:36:04 localhost systemd-sysv-generator[249851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:36:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:05 localhost python3.9[249912]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:36:05 localhost systemd[1]: Reloading. Oct 5 05:36:05 localhost systemd-rc-local-generator[249939]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:36:05 localhost systemd-sysv-generator[249942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:36:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:36:05 localhost systemd[1]: Starting openstack_network_exporter container... Oct 5 05:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-77e4045be5c881139fd829799dfaed464fba2b2ef703554c7a184a66e7396587-merged.mount: Deactivated successfully. Oct 5 05:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-77e4045be5c881139fd829799dfaed464fba2b2ef703554c7a184a66e7396587-merged.mount: Deactivated successfully. Oct 5 05:36:07 localhost systemd[1]: Started libcrun container. 
Oct 5 05:36:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4724465f015f8d16c292214170ac92ae63ab71ca1cf138483dcea35227d555/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 5 05:36:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4724465f015f8d16c292214170ac92ae63ab71ca1cf138483dcea35227d555/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Oct 5 05:36:07 localhost podman[249965]: 2025-10-05 09:36:07.049833334 +0000 UTC m=+0.292684748 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 
05:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44111 DF PROTO=TCP SPT=42876 DPT=9101 SEQ=3367136840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC766FCF60000000001030307) Oct 5 05:36:07 localhost podman[249965]: 2025-10-05 09:36:07.134427547 +0000 UTC m=+0.377278951 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:36:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:36:07 localhost podman[249953]: 2025-10-05 09:36:07.144260905 +0000 UTC m=+1.716303636 container init 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350) Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *bridge.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *coverage.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *datapath.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *iface.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *memory.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *ovnnorthd.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *ovn.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *ovsdbserver.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *pmd_perf.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 09:36:07 main.go:48: registering *pmd_rxq.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: INFO 
09:36:07 main.go:48: registering *vswitch.Collector Oct 5 05:36:07 localhost openstack_network_exporter[249988]: NOTICE 09:36:07 main.go:82: listening on http://:9105/metrics Oct 5 05:36:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:36:07 localhost podman[249953]: 2025-10-05 09:36:07.175143052 +0000 UTC m=+1.747185763 container start 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container) Oct 5 05:36:07 localhost podman[249953]: openstack_network_exporter Oct 5 05:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e-merged.mount: Deactivated successfully. Oct 5 05:36:08 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:36:08 localhost systemd[1]: Started openstack_network_exporter container. 
Oct 5 05:36:08 localhost podman[250015]: 2025-10-05 09:36:08.713001627 +0000 UTC m=+1.535885182 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9) Oct 5 05:36:08 localhost podman[250015]: 2025-10-05 09:36:08.744683106 +0000 UTC m=+1.567566661 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:36:08 localhost systemd[1]: tmp-crun.Gihwet.mount: Deactivated successfully. 
Oct 5 05:36:10 localhost python3.9[250146]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-5dec2b237273ccb78113c2b1c492ef164c4f5b231452e08517989bb84e3d4334-merged.mount: Deactivated successfully. Oct 5 05:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e-merged.mount: Deactivated successfully. Oct 5 05:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e-merged.mount: Deactivated successfully. Oct 5 05:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44112 DF PROTO=TCP SPT=42876 DPT=9101 SEQ=3367136840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7670CB60000000001030307) Oct 5 05:36:11 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:36:11 localhost systemd[1]: Stopping openstack_network_exporter container... Oct 5 05:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:36:11 localhost systemd[1]: libpod-9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.scope: Deactivated successfully. 
Oct 5 05:36:11 localhost podman[250150]: 2025-10-05 09:36:11.643069107 +0000 UTC m=+0.083355611 container died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:36:11 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.timer: Deactivated successfully. Oct 5 05:36:11 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:36:11 localhost systemd[1]: tmp-crun.vQHPCA.mount: Deactivated successfully. Oct 5 05:36:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd-userdata-shm.mount: Deactivated successfully. Oct 5 05:36:12 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:12 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 5 05:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478-merged.mount: Deactivated successfully. Oct 5 05:36:13 localhost podman[250150]: 2025-10-05 09:36:13.40540734 +0000 UTC m=+1.845693834 container cleanup 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, release=1755695350, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc.) 
Oct 5 05:36:13 localhost podman[250150]: openstack_network_exporter Oct 5 05:36:13 localhost systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Oct 5 05:36:13 localhost podman[250175]: 2025-10-05 09:36:13.495154924 +0000 UTC m=+0.062414624 container cleanup 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal) Oct 5 05:36:13 localhost podman[250175]: openstack_network_exporter Oct 5 05:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-5dec2b237273ccb78113c2b1c492ef164c4f5b231452e08517989bb84e3d4334-merged.mount: Deactivated successfully. Oct 5 05:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-5c4724465f015f8d16c292214170ac92ae63ab71ca1cf138483dcea35227d555-merged.mount: Deactivated successfully. Oct 5 05:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478-merged.mount: Deactivated successfully. Oct 5 05:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-742d30f08a388c298396549889c67e956a0883467079259a53d0a019a9ad0478-merged.mount: Deactivated successfully. Oct 5 05:36:15 localhost systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'. Oct 5 05:36:15 localhost systemd[1]: Stopped openstack_network_exporter container. Oct 5 05:36:15 localhost systemd[1]: Starting openstack_network_exporter container... 
Oct 5 05:36:15 localhost podman[250186]: 2025-10-05 09:36:15.474940164 +0000 UTC m=+1.142578347 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:36:15 localhost podman[250186]: 2025-10-05 09:36:15.479029224 +0000 UTC m=+1.146667397 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 05:36:15 localhost podman[250186]: unhealthy Oct 5 05:36:15 localhost podman[250187]: 2025-10-05 09:36:15.520431767 +0000 UTC m=+1.185261924 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:36:15 localhost podman[250187]: 2025-10-05 09:36:15.555853548 +0000 UTC m=+1.220683705 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:36:15 localhost podman[250187]: unhealthy Oct 5 
05:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:16 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:36:16 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. Oct 5 05:36:16 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:36:16 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. Oct 5 05:36:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9417 DF PROTO=TCP SPT=37128 DPT=9105 SEQ=2222881052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767227B0000000001030307) Oct 5 05:36:16 localhost systemd[1]: Started libcrun container. 
Oct 5 05:36:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4724465f015f8d16c292214170ac92ae63ab71ca1cf138483dcea35227d555/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Oct 5 05:36:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4724465f015f8d16c292214170ac92ae63ab71ca1cf138483dcea35227d555/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Oct 5 05:36:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:36:17 localhost podman[250209]: 2025-10-05 09:36:17.018632797 +0000 UTC m=+1.565486776 container init 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *bridge.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *coverage.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *datapath.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *iface.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *memory.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *ovnnorthd.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *ovn.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *ovsdbserver.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *pmd_perf.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *pmd_rxq.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: INFO 09:36:17 main.go:48: registering *vswitch.Collector Oct 5 05:36:17 localhost openstack_network_exporter[250246]: NOTICE 09:36:17 main.go:82: listening on http://:9105/metrics Oct 5 05:36:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:36:17 localhost podman[250209]: 2025-10-05 09:36:17.051910039 +0000 UTC m=+1.598764018 container start 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal) Oct 5 05:36:17 localhost podman[250209]: openstack_network_exporter Oct 5 05:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. 
Oct 5 05:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:36:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9418 DF PROTO=TCP SPT=37128 DPT=9105 SEQ=2222881052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76726760000000001030307) Oct 5 05:36:19 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:36:19 localhost systemd[1]: var-lib-containers-storage-overlay-24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d-merged.mount: Deactivated successfully. Oct 5 05:36:19 localhost systemd[1]: Started openstack_network_exporter container. Oct 5 05:36:19 localhost podman[250256]: 2025-10-05 09:36:19.428962902 +0000 UTC m=+2.367002762 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 5 05:36:19 localhost podman[250256]: 2025-10-05 09:36:19.447147065 +0000 UTC m=+2.385186945 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9) Oct 5 05:36:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9419 DF PROTO=TCP SPT=37128 DPT=9105 SEQ=2222881052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7672E760000000001030307) Oct 5 05:36:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:36:20.371 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:36:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:36:20.372 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:36:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:36:20.372 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:36:20 localhost 
systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e-merged.mount: Deactivated successfully. Oct 5 05:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e-merged.mount: Deactivated successfully. Oct 5 05:36:21 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-f5944eec7fb469ae9b7574ded24c1a7fe3b9eaecc032f74894fb3b6f1ca0c38e-merged.mount: Deactivated successfully. Oct 5 05:36:21 localhost podman[249964]: 2025-10-05 09:36:21.191219542 +0000 UTC m=+14.438374023 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:36:21 localhost podman[249964]: 2025-10-05 09:36:21.222349956 +0000 UTC m=+14.469504467 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:21 localhost python3.9[250388]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 5 05:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:36:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9420 DF PROTO=TCP SPT=37128 DPT=9105 SEQ=2222881052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7673E370000000001030307) Oct 5 05:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:36:23 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:36:23 localhost podman[250406]: 2025-10-05 09:36:23.909276112 +0000 UTC m=+1.077707646 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:36:23 localhost podman[250406]: 2025-10-05 09:36:23.917777563 +0000 UTC m=+1.086209117 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:36:24 localhost systemd[1]: 
var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:25 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:36:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21061 DF PROTO=TCP SPT=58238 DPT=9100 SEQ=2868727883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76746B70000000001030307) Oct 5 05:36:26 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:36:26 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:36:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:36:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:36:27 localhost podman[250429]: 2025-10-05 09:36:27.488656991 +0000 UTC m=+1.915332913 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 5 05:36:27 localhost podman[250429]: 2025-10-05 09:36:27.499153675 +0000 UTC m=+1.925829617 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 05:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e-merged.mount: Deactivated successfully.
Oct 5 05:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21062 DF PROTO=TCP SPT=58238 DPT=9100 SEQ=2868727883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7674EB70000000001030307)
Oct 5 05:36:28 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:36:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:36:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 5 05:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 5 05:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:36:30 localhost podman[250449]: 2025-10-05 09:36:30.413195511 +0000 UTC m=+0.082065586 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:36:30 localhost podman[250449]: 2025-10-05 09:36:30.448046526 +0000 UTC m=+0.116916611 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 05:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 5 05:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-d18dc2747c1d228beeff09121da02d0b7f69981323209f5388a795a036816caf-merged.mount: Deactivated successfully.
Oct 5 05:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-d18dc2747c1d228beeff09121da02d0b7f69981323209f5388a795a036816caf-merged.mount: Deactivated successfully.
Oct 5 05:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-24720245bb9699ab61f1e86276f8ec4cee100dcc70be97929daf5c438d551d0d-merged.mount: Deactivated successfully.
Oct 5 05:36:31 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:36:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22321 DF PROTO=TCP SPT=57532 DPT=9102 SEQ=2925916927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7675DF60000000001030307)
Oct 5 05:36:32 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:32 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 5 05:36:32 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 5 05:36:33 localhost nova_compute[238014]: 2025-10-05 09:36:33.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:33 localhost nova_compute[238014]: 2025-10-05 09:36:33.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 05:36:33 localhost nova_compute[238014]: 2025-10-05 09:36:33.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 05:36:33 localhost nova_compute[238014]: 2025-10-05 09:36:33.398 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 5 05:36:33 localhost nova_compute[238014]: 2025-10-05 09:36:33.399 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e-merged.mount: Deactivated successfully.
Oct 5 05:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38597 DF PROTO=TCP SPT=45634 DPT=9101 SEQ=344106106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76766030000000001030307)
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.378 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.400 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.400 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.400 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.400 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.401 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully.
Oct 5 05:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully.
Oct 5 05:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully.
Oct 5 05:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully.
Oct 5 05:36:34 localhost nova_compute[238014]: 2025-10-05 09:36:34.894 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.077 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.078 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13230MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.078 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.079 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.134 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.135 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.163 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 5 05:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-8ae54ce9a5138d7aabeb9eaabe0dcb4afb1a3468b56e9908af6f1efab9669523-merged.mount: Deactivated successfully.
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.629 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.640 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.658 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.661 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:36:35 localhost nova_compute[238014]: 2025-10-05 09:36:35.661 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e-merged.mount: Deactivated successfully.
Oct 5 05:36:36 localhost nova_compute[238014]: 2025-10-05 09:36:36.661 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:36 localhost nova_compute[238014]: 2025-10-05 09:36:36.662 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:36 localhost nova_compute[238014]: 2025-10-05 09:36:36.662 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 5 05:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 5 05:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 5 05:36:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38599 DF PROTO=TCP SPT=45634 DPT=9101 SEQ=344106106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76771F60000000001030307)
Oct 5 05:36:37 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 5 05:36:37 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 5 05:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:36:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:36:38 localhost podman[250510]: 2025-10-05 09:36:38.796854106 +0000 UTC m=+0.085700105 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 05:36:38 localhost podman[250510]: 2025-10-05 09:36:38.84639744 +0000 UTC m=+0.135243389 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:36:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated
successfully. Oct 5 05:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 5 05:36:39 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-d18dc2747c1d228beeff09121da02d0b7f69981323209f5388a795a036816caf-merged.mount: Deactivated successfully. Oct 5 05:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 5 05:36:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38600 DF PROTO=TCP SPT=45634 DPT=9101 SEQ=344106106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76781B60000000001030307) Oct 5 05:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 5 05:36:41 localhost podman[248157]: time="2025-10-05T09:36:41Z" level=error msg="Getting root fs size for \"36c1c246cd861cd516769d1a11468b1eb45cb90ce69e12dd51a89472c35c2491\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy" Oct 5 05:36:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:36:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 5 05:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-8ae54ce9a5138d7aabeb9eaabe0dcb4afb1a3468b56e9908af6f1efab9669523-merged.mount: Deactivated successfully. Oct 5 05:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-8ae54ce9a5138d7aabeb9eaabe0dcb4afb1a3468b56e9908af6f1efab9669523-merged.mount: Deactivated successfully. Oct 5 05:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. 
Oct 5 05:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-1db00d87bf07e127185efa8d09f579d03087a66661822459d6f980d1744528c7-merged.mount: Deactivated successfully. Oct 5 05:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-1db00d87bf07e127185efa8d09f579d03087a66661822459d6f980d1744528c7-merged.mount: Deactivated successfully. Oct 5 05:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 5 05:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:36:46 localhost podman[250536]: 2025-10-05 09:36:46.551910785 +0000 UTC m=+0.101384291 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 05:36:46 localhost podman[250536]: 2025-10-05 09:36:46.622280163 +0000 UTC m=+0.171753699 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:36:46 localhost podman[250536]: unhealthy Oct 5 05:36:46 localhost podman[250537]: 2025-10-05 09:36:46.63397982 +0000 UTC m=+0.179067037 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:36:46 localhost podman[250537]: 2025-10-05 09:36:46.669146364 +0000 UTC m=+0.214233531 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:36:46 localhost podman[250537]: unhealthy Oct 5 
05:36:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64458 DF PROTO=TCP SPT=56916 DPT=9105 SEQ=1364990844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76797AA0000000001030307) Oct 5 05:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64459 DF PROTO=TCP SPT=56916 DPT=9105 SEQ=1364990844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7679BB60000000001030307) Oct 5 05:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:48 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:36:48 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. 
Oct 5 05:36:48 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:36:48 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. Oct 5 05:36:49 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:36:49 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:49 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:49 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64460 DF PROTO=TCP SPT=56916 DPT=9105 SEQ=1364990844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767A3B60000000001030307) Oct 5 05:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:36:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:36:51 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:36:51 localhost podman[250661]: 2025-10-05 09:36:51.908941111 +0000 UTC m=+0.076519636 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible) Oct 5 05:36:51 localhost podman[250661]: 2025-10-05 09:36:51.950632222 +0000 UTC m=+0.118210757 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 5 05:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully. Oct 5 05:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-1db00d87bf07e127185efa8d09f579d03087a66661822459d6f980d1744528c7-merged.mount: Deactivated successfully. Oct 5 05:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-1db00d87bf07e127185efa8d09f579d03087a66661822459d6f980d1744528c7-merged.mount: Deactivated successfully. Oct 5 05:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:53 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-79319d12525dee5bc50b02f4506bc7bc6e833cf5798b23ca8359393e14a5b8e7-merged.mount: Deactivated successfully. Oct 5 05:36:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64461 DF PROTO=TCP SPT=56916 DPT=9105 SEQ=1364990844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767B3760000000001030307) Oct 5 05:36:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:36:54 localhost podman[250681]: 2025-10-05 09:36:54.922145005 +0000 UTC m=+0.092151742 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:36:54 localhost podman[250681]: 2025-10-05 09:36:54.956376982 +0000 UTC 
m=+0.126383709 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. 
Oct 5 05:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14730 DF PROTO=TCP SPT=42360 DPT=9882 SEQ=127560037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767BBEC0000000001030307) Oct 5 05:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:36:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:36:56 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:36:56 localhost podman[250699]: 2025-10-05 09:36:56.193569493 +0000 UTC m=+0.084524763 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:36:56 localhost podman[250699]: 2025-10-05 09:36:56.204075816 +0000 UTC m=+0.095031106 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. Oct 5 05:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:36:58 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 5 05:36:58 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:36:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53688 DF PROTO=TCP SPT=50622 DPT=9100 SEQ=299408385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767C3F60000000001030307) Oct 5 05:36:58 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:36:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:36:58 localhost podman[250723]: 2025-10-05 09:36:58.851930255 +0000 UTC m=+0.100630131 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible) Oct 5 05:36:58 localhost podman[250723]: 2025-10-05 09:36:58.865537418 +0000 UTC m=+0.114237314 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:36:58 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:36:59 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:00 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:00 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. 
Oct 5 05:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:37:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7513 DF PROTO=TCP SPT=47048 DPT=9102 SEQ=2137950074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767D3360000000001030307) Oct 5 05:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3-merged.mount: Deactivated successfully. Oct 5 05:37:02 localhost systemd[1]: tmp-crun.LpgYdK.mount: Deactivated successfully. Oct 5 05:37:02 localhost podman[250743]: 2025-10-05 09:37:02.057342647 +0000 UTC m=+0.095252292 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:37:02 localhost podman[250743]: 2025-10-05 09:37:02.068113556 +0000 UTC m=+0.106023171 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:37:02 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-79319d12525dee5bc50b02f4506bc7bc6e833cf5798b23ca8359393e14a5b8e7-merged.mount: Deactivated successfully. Oct 5 05:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. 
Oct 5 05:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47143 DF PROTO=TCP SPT=37946 DPT=9101 SEQ=4080499248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767DB330000000001030307) Oct 5 05:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. Oct 5 05:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. 
Oct 5 05:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. Oct 5 05:37:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47145 DF PROTO=TCP SPT=37946 DPT=9101 SEQ=4080499248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767E7370000000001030307) Oct 5 05:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:09 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. 
Oct 5 05:37:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:37:09 localhost podman[250760]: 2025-10-05 09:37:09.916407053 +0000 UTC m=+0.089904312 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:37:09 localhost podman[250760]: 2025-10-05 09:37:09.954086091 +0000 UTC m=+0.127583360 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base 
Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:37:09 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3-merged.mount: Deactivated successfully. 
Oct 5 05:37:10 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:37:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47146 DF PROTO=TCP SPT=37946 DPT=9101 SEQ=4080499248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC767F6F60000000001030307) Oct 5 05:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. 
Oct 5 05:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-7b5d9698f5e241817bc1ab20fc93517a066d97944c963cb3e8954ea8e4465d09-merged.mount: Deactivated successfully. Oct 5 05:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-1f399fda81bbe6240bca25723d60396a8f25e34829105df5d1e8b91fbce43961-merged.mount: Deactivated successfully. Oct 5 05:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 5 05:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 5 05:37:16 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:16 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 5 05:37:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30254 DF PROTO=TCP SPT=36224 DPT=9105 SEQ=136114949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7680CDB0000000001030307) Oct 5 05:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully. Oct 5 05:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e-merged.mount: Deactivated successfully. Oct 5 05:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:37:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30255 DF PROTO=TCP SPT=36224 DPT=9105 SEQ=136114949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76810F70000000001030307) Oct 5 05:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully. Oct 5 05:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully. 
Oct 5 05:37:18 localhost podman[248157]: time="2025-10-05T09:37:18Z" level=error msg="Getting root fs size for \"531f0c82aa59c8d072c1584c7e97cc55b9d1090231811b9bb4aa437b11ee12a8\": getting diffsize of layer \"ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy" Oct 5 05:37:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:37:18 localhost podman[250786]: 2025-10-05 09:37:18.903204791 +0000 UTC m=+0.074334888 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:37:18 localhost podman[250786]: 2025-10-05 09:37:18.944245096 +0000 UTC m=+0.115375173 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:37:18 localhost podman[250786]: unhealthy Oct 5 05:37:18 localhost podman[250787]: 2025-10-05 09:37:18.960606251 +0000 UTC m=+0.129531761 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:37:18 localhost podman[250787]: 2025-10-05 09:37:18.998104493 +0000 UTC m=+0.167029963 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:37:19 localhost podman[250787]: unhealthy Oct 5 
05:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully.
Oct 5 05:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:19 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:37:19 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'.
Oct 5 05:37:19 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:37:19 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'.
Oct 5 05:37:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:37:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30256 DF PROTO=TCP SPT=36224 DPT=9105 SEQ=136114949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76818F60000000001030307)
Oct 5 05:37:20 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully.
Oct 5 05:37:20 localhost systemd[1]: var-lib-containers-storage-overlay-1f399fda81bbe6240bca25723d60396a8f25e34829105df5d1e8b91fbce43961-merged.mount: Deactivated successfully. Oct 5 05:37:20 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:20 localhost podman[248157]: time="2025-10-05T09:37:20Z" level=error msg="Getting root fs size for \"67eea6fde46235fe26e8314f0fcd3e09678f0221d9cdbe49b223e716a030ee39\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy" Oct 5 05:37:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:37:20.373 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:37:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:37:20.373 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:37:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:37:20.373 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:37:21 localhost systemd[1]: 
var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 5 05:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 5 05:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-bcb7ced4bd7bd74e0a0f4ec2a0694dfa6707df5fca3b6302a69516f93b64f08f-merged.mount: Deactivated successfully.
Oct 5 05:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-f098c0017d0da3f1457e04ccb48f16a39779d6b090c6b44cae8dda4d8a38938b-merged.mount: Deactivated successfully.
Oct 5 05:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e-merged.mount: Deactivated successfully.
Oct 5 05:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-55cb5c865e19b2b02f6ef1f708f2f72698cf3c59e99ebc5d3f66dd7a43867d0e-merged.mount: Deactivated successfully.
Oct 5 05:37:23 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:23 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 5 05:37:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:37:23 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully. Oct 5 05:37:23 localhost podman[250828]: 2025-10-05 09:37:23.559333662 +0000 UTC m=+0.088406244 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Oct 5 05:37:23 localhost podman[250828]: 2025-10-05 09:37:23.59475886 +0000 UTC m=+0.123831432 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git) Oct 5 05:37:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30257 DF PROTO=TCP SPT=36224 DPT=9105 SEQ=136114949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76828B60000000001030307) Oct 5 05:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:37:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 5 05:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:24 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 05:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51083 DF PROTO=TCP SPT=44220 DPT=9882 SEQ=1404014770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768311C0000000001030307)
Oct 5 05:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:26 localhost podman[250847]: 2025-10-05 09:37:26.441469598 +0000 UTC m=+0.067534803 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:37:26 localhost podman[250847]: 2025-10-05 09:37:26.452128234 +0000 UTC m=+0.078193439 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent)
Oct 5 05:37:26 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 5 05:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-bcb7ced4bd7bd74e0a0f4ec2a0694dfa6707df5fca3b6302a69516f93b64f08f-merged.mount: Deactivated successfully.
Oct 5 05:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:37:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47431 DF PROTO=TCP SPT=60362 DPT=9100 SEQ=739027862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76839360000000001030307)
Oct 5 05:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:37:28 localhost systemd[1]: tmp-crun.HJQAIb.mount: Deactivated successfully.
Oct 5 05:37:28 localhost podman[250865]: 2025-10-05 09:37:28.200641867 +0000 UTC m=+0.093376453 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:37:28 localhost podman[250865]: 2025-10-05 09:37:28.211042967 +0000 UTC m=+0.103777563 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:37:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:28 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:37:29 localhost systemd[1]: tmp-crun.cGIw9H.mount: Deactivated successfully. 
Oct 5 05:37:29 localhost podman[250887]: 2025-10-05 09:37:29.917862708 +0000 UTC m=+0.086995067 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:37:29 localhost podman[250887]: 2025-10-05 09:37:29.955033163 +0000 UTC m=+0.124165482 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 05:37:31 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:37:31 localhost systemd[1]: var-lib-containers-storage-overlay-e8d5660b8fd17c472ba639c36602afe3ef86a2b23ac8f1b2407f6d07d573e2fc-merged.mount: Deactivated successfully. 
Oct 5 05:37:31 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:37:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3932 DF PROTO=TCP SPT=35252 DPT=9102 SEQ=1874984828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76848760000000001030307)
Oct 5 05:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a-merged.mount: Deactivated successfully.
Oct 5 05:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully.
Oct 5 05:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-60afe3546a98a201263be776cccb4442ad15a631184295cbccd8c923b430a1f8-merged.mount: Deactivated successfully.
Oct 5 05:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-60afe3546a98a201263be776cccb4442ad15a631184295cbccd8c923b430a1f8-merged.mount: Deactivated successfully.
Oct 5 05:37:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully.
Oct 5 05:37:33 localhost podman[250906]: 2025-10-05 09:37:33.173504082 +0000 UTC m=+0.102208601 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:37:33 localhost podman[250906]: 2025-10-05 09:37:33.18342374 +0000 UTC m=+0.112128229 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:37:33 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:37:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=964 DF PROTO=TCP SPT=58366 DPT=9101 SEQ=3252836659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76850630000000001030307) Oct 5 05:37:34 localhost systemd[1]: tmp-crun.KAbovU.mount: Deactivated successfully. Oct 5 05:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.398 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.398 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.455 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.456 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.456 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.456 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.457 2 DEBUG 
oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a-merged.mount: Deactivated successfully. Oct 5 05:37:34 localhost nova_compute[238014]: 2025-10-05 09:37:34.853 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.068 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.069 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13149MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.069 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.070 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.133 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.133 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.150 2 DEBUG oslo_concurrency.processutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.627 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.634 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.655 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 
0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.657 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:37:35 localhost nova_compute[238014]: 2025-10-05 09:37:35.658 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.636 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.637 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.654 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.655 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.655 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.655 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 
09:37:36.656 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:36 localhost nova_compute[238014]: 2025-10-05 09:37:36.656 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=966 DF PROTO=TCP SPT=58366 DPT=9101 SEQ=3252836659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7685C770000000001030307) Oct 5 05:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:37 localhost nova_compute[238014]: 2025-10-05 09:37:37.378 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. 
Oct 5 05:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:37:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-265ee1c6a66a7a26bd10096fe90cded0c1a62994fc36010106069b2755e4df7c-merged.mount: Deactivated successfully. Oct 5 05:37:40 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:37:40 localhost systemd[1]: var-lib-containers-storage-overlay-e8d5660b8fd17c472ba639c36602afe3ef86a2b23ac8f1b2407f6d07d573e2fc-merged.mount: Deactivated successfully. Oct 5 05:37:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:37:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=967 DF PROTO=TCP SPT=58366 DPT=9101 SEQ=3252836659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7686C360000000001030307) Oct 5 05:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-7387ebb91ae53af911fb3fe7ebf50b644c069b423a8881cafb6a1fa3f2b4168a-merged.mount: Deactivated successfully. Oct 5 05:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:37:42 localhost systemd[1]: session-58.scope: Deactivated successfully. Oct 5 05:37:42 localhost systemd[1]: session-58.scope: Consumed 59.559s CPU time. Oct 5 05:37:42 localhost systemd-logind[760]: Session 58 logged out. Waiting for processes to exit. Oct 5 05:37:42 localhost systemd-logind[760]: Removed session 58. Oct 5 05:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-60afe3546a98a201263be776cccb4442ad15a631184295cbccd8c923b430a1f8-merged.mount: Deactivated successfully. Oct 5 05:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. 
Oct 5 05:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:37:43 localhost podman[250968]: 2025-10-05 09:37:43.239990406 +0000 UTC m=+2.408690768 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:37:43 localhost podman[250968]: 2025-10-05 09:37:43.326183401 +0000 UTC m=+2.494883793 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:37:43 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:43 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:43 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. 
Oct 5 05:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:44 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:45 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:45 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:45 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:45 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. 
Oct 5 05:37:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:46 localhost podman[248157]: time="2025-10-05T09:37:46Z" level=error msg="Getting root fs size for \"7eba7f241e79aa3b308401b97ff79adfb18829bdc0e0cda88cbe8102568d8028\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy" Oct 5 05:37:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:37:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63602 DF PROTO=TCP SPT=56564 DPT=9105 SEQ=1532035181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768820B0000000001030307) Oct 5 05:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63603 DF PROTO=TCP SPT=56564 DPT=9105 SEQ=1532035181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76885F60000000001030307) Oct 5 05:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-265ee1c6a66a7a26bd10096fe90cded0c1a62994fc36010106069b2755e4df7c-merged.mount: Deactivated successfully. Oct 5 05:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843-merged.mount: Deactivated successfully. Oct 5 05:37:49 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. 
Oct 5 05:37:49 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63604 DF PROTO=TCP SPT=56564 DPT=9105 SEQ=1532035181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7688DF60000000001030307) Oct 5 05:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:37:49 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:49 localhost podman[250991]: 2025-10-05 09:37:49.916635813 +0000 UTC m=+0.083484506 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:37:49 localhost podman[250991]: 2025-10-05 09:37:49.96007548 +0000 UTC m=+0.126924223 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm) Oct 5 05:37:49 localhost podman[250991]: unhealthy Oct 5 05:37:49 localhost podman[250992]: 2025-10-05 09:37:49.971431385 +0000 UTC m=+0.135048274 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:37:50 localhost podman[250992]: 2025-10-05 09:37:50.00324387 +0000 UTC m=+0.166860749 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:37:50 localhost podman[250992]: unhealthy Oct 5 05:37:51 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:51 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:37:51 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:37:51 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:37:51 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. 
Oct 5 05:37:51 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:37:51 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. Oct 5 05:37:52 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:37:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63605 DF PROTO=TCP SPT=56564 DPT=9105 SEQ=1532035181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7689DB60000000001030307) Oct 5 05:37:54 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:37:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:37:54 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. 
Oct 5 05:37:54 localhost podman[251099]: 2025-10-05 09:37:54.675419476 +0000 UTC m=+0.085283454 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter) Oct 5 05:37:54 localhost podman[251099]: 2025-10-05 09:37:54.690533358 +0000 UTC m=+0.100397366 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:37:54 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:37:55 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:55 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 5 05:37:55 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:55 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:37:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:37:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65278 DF PROTO=TCP SPT=56178 DPT=9882 SEQ=208958187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768A64C0000000001030307) Oct 5 05:37:56 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:56 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:37:56 localhost podman[251137]: 2025-10-05 09:37:56.924590708 +0000 UTC m=+0.090317498 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Oct 5 05:37:56 localhost systemd[1]: 
var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:37:56 localhost systemd[1]: var-lib-containers-storage-overlay-7a737dba04724aec001e9e6bcf76377258454853b5287a5bc8d87a57a3463c09-merged.mount: Deactivated successfully. Oct 5 05:37:56 localhost podman[251137]: 2025-10-05 09:37:56.953926156 +0000 UTC m=+0.119652916 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:37:57 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:57 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 5 05:37:57 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 5 05:37:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62206 DF PROTO=TCP SPT=60912 DPT=9100 SEQ=3341889120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768AE760000000001030307) Oct 5 05:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:37:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843-merged.mount: Deactivated successfully. Oct 5 05:37:58 localhost systemd[1]: var-lib-containers-storage-overlay-d24750467c39fd6809397d716059e732daab79fc2140f5251d9b92d57cbd6843-merged.mount: Deactivated successfully. Oct 5 05:37:58 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:37:58 localhost podman[251155]: 2025-10-05 09:37:58.849875818 +0000 UTC m=+0.244712589 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:37:58 localhost podman[251155]: 2025-10-05 09:37:58.857272999 +0000 UTC m=+0.252109790 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Oct 5 05:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:37:59 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. 
Oct 5 05:37:59 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Oct 5 05:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully. Oct 5 05:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:38:01 localhost podman[251176]: 2025-10-05 09:38:01.356280257 +0000 UTC m=+0.073358246 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:38:01 localhost podman[251176]: 2025-10-05 09:38:01.390286252 +0000 UTC m=+0.107364221 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-d2231e879ead43b6a2e73a2aad2fe770af49563937e9adad8ccf7c304d6ac6ec-merged.mount: Deactivated successfully. Oct 5 05:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:38:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3832 DF PROTO=TCP SPT=53660 DPT=9102 SEQ=3763278456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768BD760000000001030307) Oct 5 05:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:01 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:38:02 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:38:02 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:02 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 5 05:38:02 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 5 05:38:02 localhost podman[248157]: time="2025-10-05T09:38:02Z" level=error msg="Getting root fs size for \"93b6f834ee1144b5a827a28bdfd4b3359026eaefec4624a56e8ab24d4e05ffc0\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy" Oct 5 05:38:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully. Oct 5 05:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-f55b66b4cc27e216ee661f88e3740f080132b0ec881f50e70b03e2853c0d8b80-merged.mount: Deactivated successfully. Oct 5 05:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully. Oct 5 05:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-7a737dba04724aec001e9e6bcf76377258454853b5287a5bc8d87a57a3463c09-merged.mount: Deactivated successfully. Oct 5 05:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:38:03 localhost systemd[1]: var-lib-containers-storage-overlay-7a737dba04724aec001e9e6bcf76377258454853b5287a5bc8d87a57a3463c09-merged.mount: Deactivated successfully.
Oct 5 05:38:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49317 DF PROTO=TCP SPT=49124 DPT=9101 SEQ=3100751050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768C5930000000001030307)
Oct 5 05:38:04 localhost podman[251195]: 2025-10-05 09:38:04.060533829 +0000 UTC m=+0.105503671 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 5 05:38:04 localhost podman[251195]: 2025-10-05 09:38:04.095605794 +0000 UTC m=+0.140575596 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd)
Oct 5 05:38:04 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:04 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 5 05:38:04 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 5 05:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:38:05 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:38:06 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:38:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:38:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:06 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:07 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49319 DF PROTO=TCP SPT=49124 DPT=9101 SEQ=3100751050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768D1B60000000001030307)
Oct 5 05:38:07 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:07 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-8035b846284d335d9393ab62c801f2456eb25851b24c50a7b13196117676086c-merged.mount: Deactivated successfully.
Oct 5 05:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-d2231e879ead43b6a2e73a2aad2fe770af49563937e9adad8ccf7c304d6ac6ec-merged.mount: Deactivated successfully.
Oct 5 05:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:38:08 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:38:09 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:09 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:09 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:09 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:10 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 5 05:38:10 localhost systemd[1]: var-lib-containers-storage-overlay-f55b66b4cc27e216ee661f88e3740f080132b0ec881f50e70b03e2853c0d8b80-merged.mount: Deactivated successfully.
Oct 5 05:38:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49320 DF PROTO=TCP SPT=49124 DPT=9101 SEQ=3100751050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768E1770000000001030307)
Oct 5 05:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:38:12 localhost systemd[1]: var-lib-containers-storage-overlay-94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af-merged.mount: Deactivated successfully.
Oct 5 05:38:13 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 5 05:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:14 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:14 localhost podman[251212]: 2025-10-05 09:38:14.701822838 +0000 UTC m=+0.117199870 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 5 05:38:14 localhost podman[251212]: 2025-10-05 09:38:14.733051827 +0000 UTC m=+0.148428819 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 5 05:38:15 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:38:15 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:38:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12186 DF PROTO=TCP SPT=50878 DPT=9105 SEQ=2639095753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768F73B0000000001030307)
Oct 5 05:38:17 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:17 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:38:17 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 5 05:38:17 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 5 05:38:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:38:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:38:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 5 05:38:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12187 DF PROTO=TCP SPT=50878 DPT=9105 SEQ=2639095753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC768FB370000000001030307)
Oct 5 05:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 5 05:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:18 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:19 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:19 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12188 DF PROTO=TCP SPT=50878 DPT=9105 SEQ=2639095753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76903360000000001030307)
Oct 5 05:38:20 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 5 05:38:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:38:20.374 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:38:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:38:20.375 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:38:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:38:20.375 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:38:21 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 5 05:38:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:38:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:38:21 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:21 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 5 05:38:21 localhost systemd[1]: var-lib-containers-storage-overlay-94a7534dc9bd34032767b158679e817adad3ea18f3ee5b9e6de5345a37dc77af-merged.mount: Deactivated successfully.
Oct 5 05:38:21 localhost podman[251237]: 2025-10-05 09:38:21.915428632 +0000 UTC m=+0.086915685 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:38:21 localhost podman[251237]: 2025-10-05 09:38:21.947178676 +0000 UTC m=+0.118665709 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:38:21 localhost podman[251237]: unhealthy
Oct 5 05:38:21 localhost podman[251238]: 2025-10-05 09:38:21.963150821 +0000 UTC m=+0.131639473 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:38:21 localhost podman[251238]: 2025-10-05 09:38:21.994470083 +0000 UTC m=+0.162958715 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:38:21 localhost podman[251238]: unhealthy
Oct 5 05:38:22 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:38:22 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'.
Oct 5 05:38:22 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE
Oct 5 05:38:22 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'.
Oct 5 05:38:22 localhost systemd[1]: tmp-crun.hAJbRx.mount: Deactivated successfully.
Oct 5 05:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 5 05:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 5 05:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12189 DF PROTO=TCP SPT=50878 DPT=9105 SEQ=2639095753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76912F60000000001030307)
Oct 5 05:38:23 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 5 05:38:24 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 5 05:38:25 localhost systemd[1]: var-lib-containers-storage-overlay-f479750c98f4a67ffae355a1e79b3c9a76d56699a79b842b4363e69f089cca49-merged.mount: Deactivated successfully.
Oct 5 05:38:25 localhost systemd[1]: var-lib-containers-storage-overlay-92c9c6b2f01f047207aca223ed13c75d75c3b5dfe8b2b9d0938721ee5dd381ac-merged.mount: Deactivated successfully.
Oct 5 05:38:25 localhost systemd[1]: var-lib-containers-storage-overlay-92c9c6b2f01f047207aca223ed13c75d75c3b5dfe8b2b9d0938721ee5dd381ac-merged.mount: Deactivated successfully.
Oct 5 05:38:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 05:38:25 localhost systemd[1]: tmp-crun.d32E4Y.mount: Deactivated successfully.
Oct 5 05:38:25 localhost podman[251278]: 2025-10-05 09:38:25.916555117 +0000 UTC m=+0.085094856 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Oct 5 05:38:25 localhost podman[251278]: 2025-10-05 09:38:25.927461394 +0000 UTC m=+0.096001113 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Oct 5 05:38:25 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 5 05:38:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49728 DF PROTO=TCP SPT=55004 DPT=9100 SEQ=928816126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7691B770000000001030307) Oct 5 05:38:26 localhost systemd[1]: var-lib-containers-storage-overlay-a88431d359b42496c7ed4ff6b33f06da63b22b9645d8b9affaed743b1113f6ea-merged.mount: Deactivated successfully. Oct 5 05:38:26 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:38:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49729 DF PROTO=TCP SPT=55004 DPT=9100 SEQ=928816126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76923770000000001030307) Oct 5 05:38:28 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:38:28 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:38:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:38:29 localhost podman[251298]: 2025-10-05 09:38:29.183402975 +0000 UTC m=+0.093997268 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:38:29 localhost systemd[1]: 
var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:38:29 localhost podman[251298]: 2025-10-05 09:38:29.195230457 +0000 UTC m=+0.105824780 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0) Oct 5 05:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:30 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:38:31 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:38:31 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:38:31 localhost podman[251314]: 2025-10-05 09:38:31.045233679 +0000 UTC m=+0.622484097 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:38:31 localhost podman[251314]: 2025-10-05 09:38:31.077690951 +0000 UTC m=+0.654941339 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:38:31 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34847 DF PROTO=TCP SPT=52674 DPT=9102 SEQ=3624610367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76932B60000000001030307) Oct 5 05:38:32 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. 
Oct 5 05:38:32 localhost nova_compute[238014]: 2025-10-05 09:38:32.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:32 localhost nova_compute[238014]: 2025-10-05 09:38:32.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 05:38:32 localhost nova_compute[238014]: 2025-10-05 09:38:32.398 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 05:38:32 localhost nova_compute[238014]: 2025-10-05 09:38:32.400 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:32 localhost nova_compute[238014]: 2025-10-05 09:38:32.400 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:38:32 localhost nova_compute[238014]: 2025-10-05 09:38:32.415 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:32 localhost systemd[1]: 
var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:38:32 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:38:32 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:38:32 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:38:32 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:33 localhost podman[251337]: 2025-10-05 09:38:33.009020396 +0000 UTC m=+0.356410888 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:38:33 localhost podman[251337]: 2025-10-05 09:38:33.043545755 +0000 UTC m=+0.390936267 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid) Oct 5 05:38:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:38:33 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:33 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2075 DF PROTO=TCP SPT=60768 DPT=9101 SEQ=3327508070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7693AC20000000001030307) Oct 5 05:38:34 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:34 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:38:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:38:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 09:38:34.428 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 09:38:34.451 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 09:38:34.451 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 09:38:34.451 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 
09:38:34.452 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 09:38:34.452 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:38:34 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:38:34 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:34 localhost nova_compute[238014]: 2025-10-05 09:38:34.850 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:38:34 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.041 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.043 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13109MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.043 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.044 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.336 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.337 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.354 2 DEBUG nova.scheduler.client.report [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.615 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.615 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.637 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.665 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:38:35 localhost nova_compute[238014]: 2025-10-05 09:38:35.687 2 DEBUG oslo_concurrency.processutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:38:36 localhost nova_compute[238014]: 2025-10-05 09:38:36.160 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:38:36 localhost nova_compute[238014]: 2025-10-05 09:38:36.167 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:38:36 localhost nova_compute[238014]: 2025-10-05 09:38:36.187 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:38:36 localhost nova_compute[238014]: 2025-10-05 09:38:36.189 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:38:36 localhost nova_compute[238014]: 2025-10-05 09:38:36.189 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:38:36 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:38:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:38:36 localhost podman[251399]: 2025-10-05 09:38:36.725037653 +0000 UTC m=+0.071081414 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd) Oct 5 05:38:36 localhost podman[251399]: 2025-10-05 09:38:36.731390167 +0000 UTC m=+0.077433928 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:38:36 localhost systemd[1]: var-lib-containers-storage-overlay-5669f7c33f2cbae2c100aead59c5bc55d637c9fe9224f3ab6a48af0ed1c37483-merged.mount: Deactivated successfully. Oct 5 05:38:36 localhost systemd[1]: var-lib-containers-storage-overlay-5669f7c33f2cbae2c100aead59c5bc55d637c9fe9224f3ab6a48af0ed1c37483-merged.mount: Deactivated successfully. Oct 5 05:38:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2077 DF PROTO=TCP SPT=60768 DPT=9101 SEQ=3327508070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76946B60000000001030307) Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.134 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.134 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.135 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.135 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.150 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.151 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.152 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:37 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:38:37 localhost nova_compute[238014]: 2025-10-05 09:38:37.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:37 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. 
Oct 5 05:38:37 localhost systemd[1]: var-lib-containers-storage-overlay-a88431d359b42496c7ed4ff6b33f06da63b22b9645d8b9affaed743b1113f6ea-merged.mount: Deactivated successfully. Oct 5 05:38:38 localhost nova_compute[238014]: 2025-10-05 09:38:38.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:38 localhost nova_compute[238014]: 2025-10-05 09:38:38.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:38 localhost nova_compute[238014]: 2025-10-05 09:38:38.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:38:38 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. 
Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 
2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:38:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:38:38 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:38:38 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:38:39 localhost nova_compute[238014]: 2025-10-05 09:38:39.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:38:39 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:39 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 5 05:38:40 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:38:40 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:38:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2078 DF PROTO=TCP SPT=60768 DPT=9101 SEQ=3327508070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76956760000000001030307) Oct 5 05:38:41 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:41 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:41 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:38:41 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:41 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:38:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:42 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:42 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:42 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 5 05:38:43 localhost systemd[1]: var-lib-containers-storage-overlay-bdd5d7f208e627ed078801541a11c92d30dfbffb1c7200a7e88292fbfc56b82d-merged.mount: Deactivated successfully. Oct 5 05:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:38:45 localhost podman[251416]: 2025-10-05 09:38:45.908679374 +0000 UTC m=+0.077366055 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:38:45 localhost podman[251416]: 2025-10-05 09:38:45.985547466 +0000 UTC m=+0.154234137 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:38:46 localhost systemd[1]: var-lib-containers-storage-overlay-5669f7c33f2cbae2c100aead59c5bc55d637c9fe9224f3ab6a48af0ed1c37483-merged.mount: Deactivated successfully. Oct 5 05:38:46 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:46 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:38:46 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:38:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41374 DF PROTO=TCP SPT=51338 DPT=9105 SEQ=1777982197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7696C6B0000000001030307) Oct 5 05:38:46 localhost systemd[1]: var-lib-containers-storage-overlay-5669f7c33f2cbae2c100aead59c5bc55d637c9fe9224f3ab6a48af0ed1c37483-merged.mount: Deactivated successfully. Oct 5 05:38:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41375 DF PROTO=TCP SPT=51338 DPT=9105 SEQ=1777982197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76970760000000001030307) Oct 5 05:38:48 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:38:48 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:38:48 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:38:48 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:38:48 localhost podman[248157]: time="2025-10-05T09:38:48Z" level=error msg="Getting root fs size for \"adfadc49f97d8bdec4a216581fd8d3e5de52dd8f84d33687875cfcf022d81956\": getting diffsize of layer \"948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy" Oct 5 05:38:49 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:49 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:49 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:38:49 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:38:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41376 DF PROTO=TCP SPT=51338 DPT=9105 SEQ=1777982197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76978770000000001030307) Oct 5 05:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 5 05:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-5030a18da58589cb9376f09d127cf9b62366340dd5dbd67fa5abee2369265346-merged.mount: Deactivated successfully. Oct 5 05:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:50 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:51 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:38:51 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:51 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:38:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:38:52 localhost podman[251442]: 2025-10-05 09:38:52.924123598 +0000 UTC m=+0.089727232 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:38:52 localhost podman[251442]: 2025-10-05 09:38:52.934281094 +0000 UTC m=+0.099884688 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:38:52 localhost podman[251442]: unhealthy Oct 5 05:38:52 localhost podman[251441]: 2025-10-05 09:38:52.889551007 +0000 UTC m=+0.061661049 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 05:38:53 localhost podman[251441]: 2025-10-05 09:38:53.022079983 +0000 UTC m=+0.194190015 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 05:38:53 localhost podman[251441]: unhealthy Oct 5 05:38:53 localhost 
systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:38:53 localhost systemd[1]: var-lib-containers-storage-overlay-beb1941435aa71e3442bb0ecaccd1897b68b01e215767a88dee6f86d4122e113-merged.mount: Deactivated successfully. Oct 5 05:38:53 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:38:53 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. Oct 5 05:38:53 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:38:53 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. Oct 5 05:38:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41377 DF PROTO=TCP SPT=51338 DPT=9105 SEQ=1777982197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76988370000000001030307) Oct 5 05:38:54 localhost systemd[1]: var-lib-containers-storage-overlay-bdd5d7f208e627ed078801541a11c92d30dfbffb1c7200a7e88292fbfc56b82d-merged.mount: Deactivated successfully. Oct 5 05:38:55 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. 
Oct 5 05:38:55 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:38:55 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:55 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:38:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17375 DF PROTO=TCP SPT=45256 DPT=9882 SEQ=204857878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76990AC0000000001030307) Oct 5 05:38:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:38:56 localhost podman[251529]: 2025-10-05 09:38:56.411011181 +0000 UTC m=+0.122584495 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7) Oct 5 05:38:56 localhost podman[251529]: 2025-10-05 09:38:56.420069389 +0000 UTC m=+0.131642673 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 5 05:38:56 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:57 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43334 DF PROTO=TCP SPT=59096 DPT=9100 SEQ=3918598435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76998B70000000001030307) Oct 5 05:38:58 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:38:58 localhost podman[248157]: time="2025-10-05T09:38:58Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged: invalid argument" Oct 5 05:38:58 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:58 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:38:58 localhost podman[248157]: time="2025-10-05T09:38:58Z" level=error msg="Getting root fs size for \"b7d07d38958eefe8f9e843dda0dc613c0081ae3fd6a6b6f5294b6717082af246\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": creating overlay mount to /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/D6YZM4LA5UEJ7WCLAMSK4FINTQ:/var/lib/containers/storage/overlay/l/IDN6N356LYE3OBITDBQNLPW6JL,upperdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/diff,workdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/work,nodev,metacopy=on\": no such file or directory" Oct 5 05:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:38:58 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. 
Oct 5 05:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:38:59 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:39:00 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:00 localhost systemd[1]: var-lib-containers-storage-overlay-32f9080afca125bdea732b66d70a39fe7d55069eaac1a486e6086cede937e213-merged.mount: Deactivated successfully. Oct 5 05:39:01 localhost systemd[1]: var-lib-containers-storage-overlay-ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938-merged.mount: Deactivated successfully. Oct 5 05:39:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:39:01 localhost systemd[1]: var-lib-containers-storage-overlay-750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d-merged.mount: Deactivated successfully. Oct 5 05:39:01 localhost systemd[1]: tmp-crun.uj0TzW.mount: Deactivated successfully. Oct 5 05:39:01 localhost podman[251641]: 2025-10-05 09:39:01.790891658 +0000 UTC m=+0.061809503 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:39:01 localhost podman[251641]: 2025-10-05 09:39:01.82590858 +0000 UTC m=+0.096826505 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:39:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=143 DF PROTO=TCP SPT=59570 DPT=9102 SEQ=1667826769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769A7F70000000001030307) Oct 5 05:39:02 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:02 localhost systemd[1]: var-lib-containers-storage-overlay-ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938-merged.mount: Deactivated successfully. Oct 5 05:39:02 localhost systemd[1]: var-lib-containers-storage-overlay-ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938-merged.mount: Deactivated successfully. Oct 5 05:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-5030a18da58589cb9376f09d127cf9b62366340dd5dbd67fa5abee2369265346-merged.mount: Deactivated successfully. Oct 5 05:39:03 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:39:03 localhost podman[251659]: 2025-10-05 09:39:03.358691182 +0000 UTC m=+0.356133871 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:39:03 localhost podman[251659]: 2025-10-05 09:39:03.374257875 +0000 UTC m=+0.371700614 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:39:03 localhost systemd[1]: var-lib-containers-storage-overlay-5030a18da58589cb9376f09d127cf9b62366340dd5dbd67fa5abee2369265346-merged.mount: Deactivated successfully. Oct 5 05:39:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58332 DF PROTO=TCP SPT=58724 DPT=9101 SEQ=2309925058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769AFF30000000001030307) Oct 5 05:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. 
Oct 5 05:39:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:39:04 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:39:05 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:39:05 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:39:05 localhost podman[251681]: 2025-10-05 09:39:05.533296474 +0000 UTC m=+1.463298312 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:39:05 localhost podman[251681]: 2025-10-05 09:39:05.546099382 +0000 UTC m=+1.476101230 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 05:39:06 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:39:06 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:39:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58334 DF PROTO=TCP SPT=58724 DPT=9101 SEQ=2309925058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769BBF60000000001030307) Oct 5 05:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:39:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. Oct 5 05:39:07 localhost systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully. 
Oct 5 05:39:07 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:39:07 localhost podman[251701]: 2025-10-05 09:39:07.796690372 +0000 UTC m=+0.258375580 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:39:07 
localhost podman[251701]: 2025-10-05 09:39:07.811078733 +0000 UTC m=+0.272763921 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:39:08 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. 
Oct 5 05:39:09 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:09 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:39:09 localhost systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully. Oct 5 05:39:09 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:10 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:39:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58335 DF PROTO=TCP SPT=58724 DPT=9101 SEQ=2309925058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769CBB60000000001030307) Oct 5 05:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d-merged.mount: Deactivated successfully. Oct 5 05:39:11 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:12 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:13 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:13 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:39:13 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully. Oct 5 05:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-32f9080afca125bdea732b66d70a39fe7d55069eaac1a486e6086cede937e213-merged.mount: Deactivated successfully. 
Oct 5 05:39:14 localhost systemd[1]: var-lib-containers-storage-overlay-32f9080afca125bdea732b66d70a39fe7d55069eaac1a486e6086cede937e213-merged.mount: Deactivated successfully. Oct 5 05:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938-merged.mount: Deactivated successfully. Oct 5 05:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:15 localhost systemd[1]: var-lib-containers-storage-overlay-750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d-merged.mount: Deactivated successfully. Oct 5 05:39:16 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:39:16 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:39:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43279 DF PROTO=TCP SPT=38362 DPT=9105 SEQ=2496895715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769E19B0000000001030307) Oct 5 05:39:16 localhost podman[251721]: 2025-10-05 09:39:16.764309986 +0000 UTC m=+0.098841170 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller) Oct 5 05:39:16 localhost podman[251721]: 2025-10-05 09:39:16.802925597 +0000 UTC m=+0.137456811 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:39:17 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:39:17 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:17 localhost systemd[1]: var-lib-containers-storage-overlay-ee06ff9b297b077dce5c039f42b6c19c94978847093570b7b6066a30f5615938-merged.mount: Deactivated successfully. 
Oct 5 05:39:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43280 DF PROTO=TCP SPT=38362 DPT=9105 SEQ=2496895715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769E5B60000000001030307) Oct 5 05:39:17 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:39:18 localhost systemd[1]: var-lib-containers-storage-overlay-dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a-merged.mount: Deactivated successfully. Oct 5 05:39:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43281 DF PROTO=TCP SPT=38362 DPT=9105 SEQ=2496895715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769EDB60000000001030307) Oct 5 05:39:19 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:39:20 localhost systemd[1]: var-lib-containers-storage-overlay-750273294f7ba0ffeaf17c632cdda1a5fbbb0fc1490e1e8d52d534c991add83d-merged.mount: Deactivated successfully. 
Oct 5 05:39:20 localhost systemd[1]: var-lib-containers-storage-overlay-14165343956b68f6adce0a282bc9a68a91e1d66b2adbe87d958d61d99ad6d3d8-merged.mount: Deactivated successfully. Oct 5 05:39:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:39:20.375 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:39:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:39:20.375 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:39:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:39:20.375 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 5 05:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 5 05:39:21 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 5 05:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. 
Oct 5 05:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:22 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:23 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 5 05:39:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:39:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:39:23 localhost podman[251743]: 2025-10-05 09:39:23.8500347 +0000 UTC m=+0.080833260 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:39:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43282 DF PROTO=TCP SPT=38362 DPT=9105 SEQ=2496895715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC769FD760000000001030307) Oct 5 05:39:23 localhost podman[251743]: 2025-10-05 09:39:23.884444577 +0000 UTC m=+0.115243167 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, 
container_name=ceilometer_agent_compute) Oct 5 05:39:23 localhost podman[251743]: unhealthy Oct 5 05:39:23 localhost podman[251744]: 2025-10-05 09:39:23.907484194 +0000 UTC m=+0.134390467 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:39:23 localhost podman[251744]: 2025-10-05 09:39:23.94521727 +0000 UTC m=+0.172123493 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 
'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:39:23 localhost podman[251744]: unhealthy Oct 5 05:39:23 localhost systemd[1]: var-lib-containers-storage-overlay-49720dbe515448afa07243eb8af1d9da9501d8bf4fb266e194f65378a3f3db49-merged.mount: Deactivated successfully. Oct 5 05:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:24 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:25 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. Oct 5 05:39:25 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:39:25 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:39:25 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:39:25 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. 
Oct 5 05:39:25 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:39:25 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. Oct 5 05:39:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61825 DF PROTO=TCP SPT=58002 DPT=9882 SEQ=3873968336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A05DC0000000001030307) Oct 5 05:39:26 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:26 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:27 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:27 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:27 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:27 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. 
Oct 5 05:39:27 localhost sshd[251779]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:39:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42703 DF PROTO=TCP SPT=39876 DPT=9100 SEQ=3541366024 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A0DF70000000001030307) Oct 5 05:39:28 localhost systemd-logind[760]: New session 59 of user zuul. Oct 5 05:39:28 localhost systemd[1]: Started Session 59 of User zuul. Oct 5 05:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:39:28 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:39:28 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:28 localhost systemd[1]: tmp-crun.RQ6Ndy.mount: Deactivated successfully. 
Oct 5 05:39:28 localhost podman[251854]: 2025-10-05 09:39:28.678298049 +0000 UTC m=+0.100123865 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter) Oct 5 05:39:28 localhost podman[251854]: 2025-10-05 09:39:28.688500036 +0000 UTC m=+0.110325842 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Oct 5 05:39:28 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:29 localhost python3.9[251894]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman Oct 5 05:39:30 localhost systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully. Oct 5 05:39:30 localhost systemd[1]: var-lib-containers-storage-overlay-14165343956b68f6adce0a282bc9a68a91e1d66b2adbe87d958d61d99ad6d3d8-merged.mount: Deactivated successfully. Oct 5 05:39:30 localhost systemd[1]: var-lib-containers-storage-overlay-14165343956b68f6adce0a282bc9a68a91e1d66b2adbe87d958d61d99ad6d3d8-merged.mount: Deactivated successfully. Oct 5 05:39:30 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. 
Oct 5 05:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully. Oct 5 05:39:31 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 5 05:39:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13698 DF PROTO=TCP SPT=35298 DPT=9102 SEQ=1699551387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A1D360000000001030307) Oct 5 05:39:32 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:32 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:32 localhost python3.9[252016]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:39:33 localhost systemd[1]: Started libpod-conmon-70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.scope. 
Oct 5 05:39:33 localhost podman[252017]: 2025-10-05 09:39:33.081960036 +0000 UTC m=+0.136867605 container exec 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:39:33 localhost podman[252017]: 2025-10-05 09:39:33.112141897 +0000 UTC m=+0.167049466 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:39:33 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:39:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23932 DF PROTO=TCP SPT=47830 DPT=9101 SEQ=2192048486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A25230000000001030307) Oct 5 05:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. 
Oct 5 05:39:34 localhost podman[252047]: 2025-10-05 09:39:34.358941567 +0000 UTC m=+0.521732246 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.376 2 DEBUG 
oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:34 localhost podman[252047]: 2025-10-05 09:39:34.392341276 +0000 UTC m=+0.555131905 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.405 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.405 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.405 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.405 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.405 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:39:34 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Oct 5 05:39:34 localhost nova_compute[238014]: 2025-10-05 09:39:34.853 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:39:35 localhost python3.9[252194]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.073 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.075 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12991MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.076 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.177 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.178 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.195 2 DEBUG oslo_concurrency.processutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:39:35 localhost systemd[1]: libpod-conmon-70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.scope: Deactivated successfully. Oct 5 05:39:35 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully. Oct 5 05:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully. Oct 5 05:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-49720dbe515448afa07243eb8af1d9da9501d8bf4fb266e194f65378a3f3db49-merged.mount: Deactivated successfully. Oct 5 05:39:35 localhost systemd[1]: Started libpod-conmon-70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.scope. Oct 5 05:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully. Oct 5 05:39:35 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. 
Oct 5 05:39:35 localhost podman[252197]: 2025-10-05 09:39:35.382033462 +0000 UTC m=+0.350763054 container exec 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:39:35 localhost podman[252197]: 2025-10-05 09:39:35.412539822 +0000 UTC m=+0.381269374 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.695 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.701 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.723 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 
15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.725 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:39:35 localhost nova_compute[238014]: 2025-10-05 09:39:35.725 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:39:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:39:36 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Oct 5 05:39:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23934 DF PROTO=TCP SPT=47830 DPT=9101 SEQ=2192048486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A31370000000001030307) Oct 5 05:39:37 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. 
Oct 5 05:39:37 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:39:37 localhost podman[252249]: 2025-10-05 09:39:37.603598242 +0000 UTC m=+1.771890628 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:39:37 localhost podman[252249]: 2025-10-05 09:39:37.6109119 +0000 UTC m=+1.779204296 container exec_died 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.722 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.722 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.722 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.723 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.737 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.738 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:37 localhost nova_compute[238014]: 2025-10-05 09:39:37.738 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:39:38 localhost python3.9[252392]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:39:38 localhost nova_compute[238014]: 2025-10-05 09:39:38.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:38 localhost nova_compute[238014]: 2025-10-05 09:39:38.397 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:38 localhost nova_compute[238014]: 2025-10-05 09:39:38.397 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:38 localhost nova_compute[238014]: 2025-10-05 09:39:38.397 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:39:38 localhost python3.9[252502]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman Oct 5 05:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:38 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. Oct 5 05:39:39 localhost systemd[1]: var-lib-containers-storage-overlay-51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98-merged.mount: Deactivated successfully. Oct 5 05:39:39 localhost nova_compute[238014]: 2025-10-05 09:39:39.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:39:39 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:40 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:39:40 localhost systemd[1]: libpod-conmon-70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.scope: Deactivated successfully. 
Oct 5 05:39:40 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:39:40 localhost podman[252348]: 2025-10-05 09:39:40.384716985 +0000 UTC m=+2.535202464 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:39:40 localhost podman[252348]: 2025-10-05 09:39:40.39705062 +0000 UTC m=+2.547536099 container exec_died 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid) Oct 5 05:39:40 localhost podman[252514]: 2025-10-05 09:39:40.444723947 +0000 UTC m=+0.611213269 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 05:39:40 localhost podman[252514]: 2025-10-05 09:39:40.481737694 +0000 UTC m=+0.648226976 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2-merged.mount: Deactivated successfully. Oct 5 05:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. 
Oct 5 05:39:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23935 DF PROTO=TCP SPT=47830 DPT=9101 SEQ=2192048486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A40F60000000001030307) Oct 5 05:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39-merged.mount: Deactivated successfully. Oct 5 05:39:41 localhost nova_compute[238014]: 2025-10-05 09:39:41.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:39:41 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:42 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:42 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:39:42 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. 
Oct 5 05:39:43 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:44 localhost python3.9[252651]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:44 localhost systemd[1]: Started libpod-conmon-2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.scope. Oct 5 05:39:44 localhost podman[252652]: 2025-10-05 09:39:44.326989489 +0000 UTC m=+0.094499242 container exec 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 05:39:44 localhost podman[252652]: 2025-10-05 09:39:44.36122074 +0000 UTC m=+0.128730453 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:39:44 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7-merged.mount: Deactivated successfully. Oct 5 05:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. Oct 5 05:39:45 localhost systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully. 
Oct 5 05:39:46 localhost python3.9[252789]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:39:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50983 DF PROTO=TCP SPT=54102 DPT=9105 SEQ=4123970866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A56CA0000000001030307) Oct 5 05:39:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50984 DF PROTO=TCP SPT=54102 DPT=9105 SEQ=4123970866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A5AB70000000001030307) Oct 5 05:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:39:47 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a-merged.mount: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: libpod-conmon-2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.scope: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: Started libpod-conmon-2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.scope. 
Oct 5 05:39:48 localhost podman[252790]: 2025-10-05 09:39:48.209889517 +0000 UTC m=+1.786541206 container exec 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 05:39:48 localhost podman[252801]: 2025-10-05 09:39:48.245609859 +0000 UTC m=+0.405274837 container 
health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:39:48 localhost podman[252801]: 2025-10-05 09:39:48.293188754 +0000 UTC m=+0.452853732 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:39:48 localhost podman[252820]: 2025-10-05 09:39:48.314953055 +0000 UTC m=+0.093486374 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:39:48 localhost podman[252790]: 2025-10-05 09:39:48.319595992 +0000 UTC m=+1.896247681 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:39:48 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: libpod-conmon-2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.scope: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully. Oct 5 05:39:48 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully. 
Oct 5 05:39:48 localhost python3.9[252953]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:39:49 localhost python3.9[253063]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman Oct 5 05:39:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50985 DF PROTO=TCP SPT=54102 DPT=9105 SEQ=4123970866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A62B60000000001030307) Oct 5 05:39:49 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 5 05:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 5 05:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:50 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. 
Oct 5 05:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:39:52 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:39:53 localhost python3.9[253185]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:39:53 localhost systemd[1]: Started libpod-conmon-289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.scope. Oct 5 05:39:53 localhost podman[253186]: 2025-10-05 09:39:53.189966046 +0000 UTC m=+0.104983268 container exec 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid) Oct 5 05:39:53 localhost podman[253186]: 2025-10-05 09:39:53.223279963 +0000 UTC m=+0.138297215 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:39:53 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50986 DF PROTO=TCP SPT=54102 DPT=9105 SEQ=4123970866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A72760000000001030307) Oct 5 05:39:54 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:54 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:54 localhost systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully. Oct 5 05:39:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. 
Oct 5 05:39:55 localhost python3.9[253325]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:55 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:39:55 localhost systemd[1]: libpod-conmon-289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.scope: Deactivated successfully. Oct 5 05:39:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:55 localhost systemd[1]: Started libpod-conmon-289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.scope. 
Oct 5 05:39:55 localhost podman[253326]: 2025-10-05 09:39:55.678971812 +0000 UTC m=+0.236124045 container exec 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:39:55 localhost podman[253326]: 2025-10-05 09:39:55.709898723 +0000 UTC m=+0.267050926 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, 
org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid) Oct 5 05:39:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5223 DF PROTO=TCP SPT=33196 DPT=9882 SEQ=3299092019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A7B0C0000000001030307) Oct 5 05:39:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 05:39:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:56 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:39:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Oct 5 05:39:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Oct 5 05:39:56 localhost podman[253356]: 2025-10-05 09:39:56.439687368 +0000 UTC m=+0.352173762 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:39:56 localhost podman[253355]: 2025-10-05 09:39:56.479516362 +0000 UTC m=+0.392474769 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 05:39:56 localhost podman[253355]: 2025-10-05 09:39:56.487056467 +0000 UTC m=+0.400014854 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:39:56 localhost podman[253355]: unhealthy Oct 5 05:39:56 localhost podman[253356]: 2025-10-05 09:39:56.576095199 +0000 UTC m=+0.488581533 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:39:56 localhost podman[253356]: 
unhealthy Oct 5 05:39:56 localhost python3.9[253505]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:39:57 localhost python3.9[253615]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman Oct 5 05:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 5 05:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-8df95372cdfa3047b33cd0040d0663ba9895a7edf8e92f134854350b1276dcf4-merged.mount: Deactivated successfully. Oct 5 05:39:57 localhost podman[248157]: time="2025-10-05T09:39:57Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\"" Oct 5 05:39:57 localhost podman[248157]: @ - - [05/Oct/2025:09:35:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1" Oct 5 05:39:57 localhost systemd[1]: var-lib-containers-storage-overlay-8df95372cdfa3047b33cd0040d0663ba9895a7edf8e92f134854350b1276dcf4-merged.mount: Deactivated successfully. Oct 5 05:39:57 localhost systemd[1]: libpod-conmon-289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.scope: Deactivated successfully. Oct 5 05:39:57 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:39:57 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Failed with result 'exit-code'. 
Oct 5 05:39:57 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:39:57 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Failed with result 'exit-code'. Oct 5 05:39:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39658 DF PROTO=TCP SPT=55376 DPT=9100 SEQ=2773058750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A83370000000001030307) Oct 5 05:40:00 localhost systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully. Oct 5 05:40:00 localhost systemd[1]: var-lib-containers-storage-overlay-78aae97843639e0540fd3ff25daf88917fb3dc3798e04bf7c2b460ca17dd485a-merged.mount: Deactivated successfully. Oct 5 05:40:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:40:00 localhost podman[253664]: 2025-10-05 09:40:00.708664781 +0000 UTC m=+0.128022145 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git) Oct 5 05:40:00 localhost podman[253664]: 2025-10-05 09:40:00.72486187 +0000 UTC m=+0.144219284 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc.) Oct 5 05:40:00 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:40:01 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. Oct 5 05:40:01 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Oct 5 05:40:01 localhost python3.9[253824]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:01 localhost systemd[1]: Started libpod-conmon-508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.scope. 
Oct 5 05:40:01 localhost podman[253825]: 2025-10-05 09:40:01.471886665 +0000 UTC m=+0.128294172 container exec 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd) Oct 5 05:40:01 localhost podman[253825]: 2025-10-05 09:40:01.510078814 +0000 UTC m=+0.166486281 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:40:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49328 DF PROTO=TCP SPT=49360 DPT=9102 SEQ=3282683155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A92360000000001030307) Oct 5 05:40:02 localhost 
systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:40:02 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 5 05:40:02 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 5 05:40:03 localhost python3.9[253965]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:40:03 localhost systemd[1]: var-lib-containers-storage-overlay-ee47c660ea26d21ce84215704612469c43166e04b223dbf8f0a2a895de34e216-merged.mount: Deactivated successfully. Oct 5 05:40:03 localhost systemd[1]: libpod-conmon-508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.scope: Deactivated successfully. Oct 5 05:40:03 localhost systemd[1]: Started libpod-conmon-508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.scope. 
Oct 5 05:40:03 localhost podman[253966]: 2025-10-05 09:40:03.880107883 +0000 UTC m=+0.704438416 container exec 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:40:03 localhost podman[253966]: 2025-10-05 09:40:03.915347281 +0000 UTC m=+0.739677844 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f 
(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:40:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51595 DF PROTO=TCP SPT=48652 DPT=9101 SEQ=2754271893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76A9A530000000001030307) Oct 5 05:40:04 localhost 
systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:04 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:40:04 localhost systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully. Oct 5 05:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:40:05 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:05 localhost python3.9[254119]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:05 localhost systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully. Oct 5 05:40:05 localhost systemd[1]: libpod-conmon-508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.scope: Deactivated successfully. 
Oct 5 05:40:05 localhost podman[254116]: 2025-10-05 09:40:05.768321784 +0000 UTC m=+0.330700648 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Oct 5 05:40:05 localhost podman[254116]: 2025-10-05 09:40:05.780010802 +0000 UTC 
m=+0.342389686 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:40:06 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:40:06 localhost python3.9[254252]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman Oct 5 05:40:06 localhost systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully. Oct 5 05:40:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51597 DF PROTO=TCP SPT=48652 DPT=9101 SEQ=2754271893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76AA6760000000001030307) Oct 5 05:40:07 localhost systemd[1]: var-lib-containers-storage-overlay-f58f2b4f8f560729736f5941b846f416eb5c90f8a03f52e63e224ade26f2e564-merged.mount: Deactivated successfully. Oct 5 05:40:07 localhost systemd[1]: var-lib-containers-storage-overlay-8df95372cdfa3047b33cd0040d0663ba9895a7edf8e92f134854350b1276dcf4-merged.mount: Deactivated successfully. Oct 5 05:40:07 localhost podman[248157]: @ - - [05/Oct/2025:09:35:07 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 130412 "" "Go-http-client/1.1" Oct 5 05:40:07 localhost podman_exporter[248402]: ts=2025-10-05T09:40:07.586Z caller=exporter.go:96 level=info msg="Listening on" address=:9882 Oct 5 05:40:07 localhost podman_exporter[248402]: ts=2025-10-05T09:40:07.587Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882 Oct 5 05:40:07 localhost podman_exporter[248402]: ts=2025-10-05T09:40:07.587Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882 Oct 5 05:40:07 localhost systemd[1]: var-lib-containers-storage-overlay-8df95372cdfa3047b33cd0040d0663ba9895a7edf8e92f134854350b1276dcf4-merged.mount: Deactivated successfully. 
Oct 5 05:40:08 localhost python3.9[254376]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:08 localhost systemd[1]: Started libpod-conmon-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.scope. Oct 5 05:40:08 localhost podman[254377]: 2025-10-05 09:40:08.663395852 +0000 UTC m=+0.101317154 container exec b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:40:08 localhost podman[254377]: 2025-10-05 09:40:08.692670086 +0000 UTC m=+0.130591348 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, config_id=edpm) Oct 5 05:40:08 localhost systemd[1]: libpod-conmon-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.scope: Deactivated successfully. Oct 5 05:40:09 localhost python3.9[254514]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:09 localhost systemd[1]: Started libpod-conmon-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.scope. Oct 5 05:40:09 localhost podman[254515]: 2025-10-05 09:40:09.460542005 +0000 UTC m=+0.105681944 container exec b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 05:40:09 localhost podman[254515]: 2025-10-05 09:40:09.489723796 +0000 UTC m=+0.134863735 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 05:40:09 localhost systemd[1]: libpod-conmon-b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.scope: Deactivated successfully. Oct 5 05:40:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:40:10 localhost systemd[1]: tmp-crun.LrMGm1.mount: Deactivated successfully. Oct 5 05:40:10 localhost podman[254657]: 2025-10-05 09:40:10.700604801 +0000 UTC m=+0.100960654 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:40:10 localhost podman[254657]: 2025-10-05 09:40:10.713130915 +0000 UTC m=+0.113486758 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:40:10 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:40:10 localhost python3.9[254656]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51598 DF PROTO=TCP SPT=48652 DPT=9101 SEQ=2754271893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76AB6360000000001030307) Oct 5 05:40:11 localhost python3.9[254788]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman Oct 5 05:40:12 localhost python3.9[254911]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:12 localhost systemd[1]: Started libpod-conmon-ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.scope. Oct 5 05:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:40:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:40:12 localhost podman[254912]: 2025-10-05 09:40:12.434050637 +0000 UTC m=+0.113798886 container exec ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:40:12 localhost podman[254912]: 2025-10-05 09:40:12.464231626 +0000 UTC m=+0.143979885 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 
'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:40:12 localhost systemd[1]: libpod-conmon-ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.scope: Deactivated successfully. 
Oct 5 05:40:12 localhost podman[254929]: 2025-10-05 09:40:12.525504849 +0000 UTC m=+0.092148252 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd) Oct 5 05:40:12 localhost podman[254927]: 2025-10-05 09:40:12.58925874 +0000 UTC m=+0.157521828 container health_status 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:40:12 localhost podman[254927]: 2025-10-05 09:40:12.599218653 +0000 UTC m=+0.167481601 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:40:12 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:40:12 localhost podman[254929]: 2025-10-05 09:40:12.617630889 +0000 UTC m=+0.184274252 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:40:12 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:40:13 localhost python3.9[255089]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:13 localhost systemd[1]: Started libpod-conmon-ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.scope. Oct 5 05:40:13 localhost podman[255090]: 2025-10-05 09:40:13.317369626 +0000 UTC m=+0.101611751 container exec ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:40:13 localhost podman[255090]: 2025-10-05 09:40:13.347707819 +0000 UTC m=+0.131949914 container exec_died 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:40:13 localhost systemd[1]: libpod-conmon-ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.scope: Deactivated successfully. 
Oct 5 05:40:13 localhost auditd[725]: Audit daemon rotating log files Oct 5 05:40:14 localhost python3.9[255231]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:14 localhost python3.9[255341]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman Oct 5 05:40:15 localhost python3.9[255464]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:15 localhost systemd[1]: Started libpod-conmon-ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.scope. 
Oct 5 05:40:15 localhost podman[255465]: 2025-10-05 09:40:15.713825171 +0000 UTC m=+0.110130655 container exec ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:40:15 localhost podman[255465]: 2025-10-05 09:40:15.746226051 +0000 UTC m=+0.142531475 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:40:15 localhost systemd[1]: libpod-conmon-ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.scope: Deactivated successfully. Oct 5 05:40:16 localhost python3.9[255604]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:16 localhost systemd[1]: Started libpod-conmon-ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.scope. Oct 5 05:40:16 localhost podman[255605]: 2025-10-05 09:40:16.585310015 +0000 UTC m=+0.117914519 container exec ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:40:16 localhost podman[255605]: 2025-10-05 09:40:16.614387343 +0000 UTC m=+0.146991837 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:40:16 localhost systemd[1]: libpod-conmon-ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.scope: Deactivated successfully. 
Oct 5 05:40:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62718 DF PROTO=TCP SPT=42172 DPT=9105 SEQ=1607067539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76ACBFB0000000001030307) Oct 5 05:40:17 localhost python3.9[255743]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62719 DF PROTO=TCP SPT=42172 DPT=9105 SEQ=1607067539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76ACFF60000000001030307) Oct 5 05:40:18 localhost python3.9[255853]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman Oct 5 05:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:40:18 localhost systemd[1]: tmp-crun.kjaRZ8.mount: Deactivated successfully. 
Oct 5 05:40:18 localhost podman[255977]: 2025-10-05 09:40:18.680540618 +0000 UTC m=+0.109029526 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 05:40:18 localhost podman[255977]: 2025-10-05 09:40:18.749190082 +0000 UTC m=+0.177678950 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:40:18 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:40:18 localhost python3.9[255976]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:18 localhost systemd[1]: Started libpod-conmon-9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.scope. 
Oct 5 05:40:18 localhost podman[256002]: 2025-10-05 09:40:18.893959569 +0000 UTC m=+0.106002023 container exec 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:40:18 localhost podman[256002]: 2025-10-05 09:40:18.928181518 +0000 UTC m=+0.140223952 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:40:18 localhost systemd[1]: 
libpod-conmon-9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.scope: Deactivated successfully. Oct 5 05:40:19 localhost python3.9[256142]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None Oct 5 05:40:19 localhost systemd[1]: Started libpod-conmon-9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.scope. Oct 5 05:40:19 localhost podman[256143]: 2025-10-05 09:40:19.691713178 +0000 UTC m=+0.095635437 container exec 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9) Oct 5 05:40:19 localhost podman[256143]: 2025-10-05 09:40:19.69909201 +0000 UTC m=+0.103014229 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:40:19 localhost systemd[1]: libpod-conmon-9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.scope: Deactivated successfully. 
Oct 5 05:40:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62720 DF PROTO=TCP SPT=42172 DPT=9105 SEQ=1607067539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76AD7F60000000001030307) Oct 5 05:40:20 localhost python3.9[256283]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:40:20.376 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:40:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:40:20.377 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:40:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:40:20.377 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:40:21 localhost python3.9[256393]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:22 localhost python3.9[256503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:23 localhost python3.9[256591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759657222.1379507-3167-67259749066919/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62721 DF PROTO=TCP SPT=42172 DPT=9105 SEQ=1607067539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76AE7B60000000001030307) Oct 5 05:40:24 localhost python3.9[256701]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:25 localhost python3.9[256811]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8658 DF PROTO=TCP SPT=41176 DPT=9100 SEQ=1293260929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76AF0360000000001030307) Oct 5 05:40:26 localhost python3.9[256868]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:26 localhost python3.9[256978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:27 localhost python3.9[257035]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hqsnqpfn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:27 localhost python3.9[257145]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True 
get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8659 DF PROTO=TCP SPT=41176 DPT=9100 SEQ=1293260929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76AF8360000000001030307) Oct 5 05:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:40:28 localhost systemd[1]: tmp-crun.XhdwpN.mount: Deactivated successfully. Oct 5 05:40:28 localhost podman[257204]: 2025-10-05 09:40:28.268420284 +0000 UTC m=+0.075863635 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:40:28 localhost podman[257204]: 2025-10-05 09:40:28.279181529 +0000 UTC m=+0.086624930 container exec_died 
ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:40:28 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 05:40:28 localhost podman[257203]: 2025-10-05 09:40:28.344870334 +0000 UTC m=+0.152007677 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:40:28 localhost podman[257203]: 2025-10-05 09:40:28.383040771 +0000 UTC m=+0.190178134 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute) Oct 5 05:40:28 localhost python3.9[257202]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:28 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:40:29 localhost python3.9[257353]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:29 localhost python3[257464]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Oct 5 05:40:30 localhost python3.9[257574]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:40:30 localhost podman[257577]: 2025-10-05 09:40:30.908118149 +0000 UTC m=+0.070269660 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:40:30 
localhost podman[257577]: 2025-10-05 09:40:30.919095541 +0000 UTC m=+0.081247042 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9) Oct 5 05:40:30 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:40:31 localhost python3.9[257651]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12541 DF PROTO=TCP SPT=45870 DPT=9102 SEQ=1240466035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B07760000000001030307) Oct 5 05:40:32 localhost python3.9[257761]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:33 
localhost python3.9[257818]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:33 localhost python3.9[257928]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42988 DF PROTO=TCP SPT=35290 DPT=9101 SEQ=3456734131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B0F830000000001030307) Oct 5 05:40:34 localhost python3.9[257985]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:35 localhost nova_compute[238014]: 2025-10-05 09:40:35.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:35 localhost nova_compute[238014]: 2025-10-05 09:40:35.397 2 DEBUG oslo_concurrency.lockutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:40:35 localhost nova_compute[238014]: 2025-10-05 09:40:35.398 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:40:35 localhost nova_compute[238014]: 2025-10-05 09:40:35.398 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:40:35 localhost nova_compute[238014]: 2025-10-05 09:40:35.399 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:40:35 localhost nova_compute[238014]: 2025-10-05 09:40:35.399 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:40:35 localhost python3.9[258096]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:35 localhost nova_compute[238014]: 
2025-10-05 09:40:35.849 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.058 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.059 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=13010MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.059 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.060 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:40:36 localhost python3.9[258174]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.115 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.115 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.131 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.582 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.587 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.755 2 
DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.758 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:40:36 localhost nova_compute[238014]: 2025-10-05 09:40:36.759 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:40:36 localhost podman[258271]: 2025-10-05 09:40:36.914513026 +0000 UTC m=+0.083228907 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:40:36 localhost podman[258271]: 2025-10-05 09:40:36.922059854 +0000 UTC 
m=+0.090775725 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:40:36 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:40:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42990 DF PROTO=TCP SPT=35290 DPT=9101 SEQ=3456734131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B1B770000000001030307) Oct 5 05:40:37 localhost python3.9[258322]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:37 localhost python3.9[258412]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759657236.383578-3542-38340168037920/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:37 localhost nova_compute[238014]: 2025-10-05 09:40:37.759 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:37 localhost nova_compute[238014]: 2025-10-05 09:40:37.760 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:37 localhost nova_compute[238014]: 2025-10-05 09:40:37.760 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.395 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.395 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.396 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:38 localhost nova_compute[238014]: 2025-10-05 09:40:38.396 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:40:38 localhost python3.9[258522]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 
05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:40:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:40:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:40:39 localhost python3.9[258632]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:40 localhost python3.9[258745]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:40 localhost nova_compute[238014]: 2025-10-05 09:40:40.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:40:40 localhost python3.9[258855]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:40 localhost podman[258856]: 2025-10-05 09:40:40.907105067 +0000 UTC m=+0.074954060 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:40:40 localhost podman[258856]: 2025-10-05 09:40:40.920081073 +0000 UTC m=+0.087930086 container exec_died 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:40:40 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:40:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42991 DF PROTO=TCP SPT=35290 DPT=9101 SEQ=3456734131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B2B360000000001030307) Oct 5 05:40:41 localhost nova_compute[238014]: 2025-10-05 09:40:41.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:40:41 localhost python3.9[258991]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:40:42 localhost python3.9[259103]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:40:42 localhost podman[259217]: 2025-10-05 09:40:42.928708457 +0000 UTC m=+0.092568853 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:40:42 localhost podman[259217]: 2025-10-05 09:40:42.934870536 +0000 UTC m=+0.098730912 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:40:42 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:40:42 localhost systemd[1]: tmp-crun.3o0edD.mount: Deactivated successfully. 
Oct 5 05:40:42 localhost podman[259218]: 2025-10-05 09:40:42.995990625 +0000 UTC m=+0.157506997 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:40:43 localhost podman[259218]: 2025-10-05 09:40:43.007000707 +0000 UTC m=+0.168517029 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:40:43 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:40:43 localhost python3.9[259216]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:40:43 localhost systemd[1]: session-59.scope: Deactivated successfully. Oct 5 05:40:43 localhost systemd[1]: session-59.scope: Consumed 32.506s CPU time. Oct 5 05:40:43 localhost systemd-logind[760]: Session 59 logged out. Waiting for processes to exit. Oct 5 05:40:43 localhost systemd-logind[760]: Removed session 59. Oct 5 05:40:46 localhost openstack_network_exporter[250246]: ERROR 09:40:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:40:46 localhost openstack_network_exporter[250246]: ERROR 09:40:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:40:46 localhost openstack_network_exporter[250246]: ERROR 09:40:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:40:46 localhost openstack_network_exporter[250246]: ERROR 09:40:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:40:46 localhost openstack_network_exporter[250246]: Oct 5 05:40:46 localhost openstack_network_exporter[250246]: ERROR 09:40:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:40:46 localhost openstack_network_exporter[250246]: Oct 5 05:40:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:40:48 localhost systemd[1]: tmp-crun.UVV6CW.mount: Deactivated successfully. 
Oct 5 05:40:48 localhost podman[259278]: 2025-10-05 09:40:48.898050207 +0000 UTC m=+0.068945905 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 5 05:40:48 localhost podman[259278]: 2025-10-05 09:40:48.936184333 +0000 UTC m=+0.107080061 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:40:48 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:40:49 localhost sshd[259304]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:40:49 localhost systemd-logind[760]: New session 60 of user zuul. Oct 5 05:40:49 localhost systemd[1]: Started Session 60 of User zuul. 
Oct 5 05:40:50 localhost python3.9[259417]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:51 localhost python3.9[259527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:51 localhost python3.9[259637]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:52 localhost python3.9[259745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:53 localhost python3.9[259831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657252.120235-107-185333140118467/.source.yaml follow=False 
_original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:54 localhost python3.9[259939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:54 localhost python3.9[260025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657253.5646877-152-81328296683508/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:54 localhost python3.9[260133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:55 localhost python3.9[260219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657254.5741563-152-182095614332305/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:56 localhost podman[248157]: time="2025-10-05T09:40:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:40:56 localhost podman[248157]: @ - - [05/Oct/2025:09:40:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 132068 "" "Go-http-client/1.1" Oct 5 05:40:56 localhost podman[248157]: @ - - [05/Oct/2025:09:40:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15990 "" "Go-http-client/1.1" Oct 5 05:40:56 localhost python3.9[260327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:57 localhost python3.9[260413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657255.5749917-152-222994639159509/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=c7e6a16284b1b75ddc7c8a6672acda445cebaa37 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:40:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:40:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:40:58 localhost podman[260522]: 2025-10-05 09:40:58.921981619 +0000 UTC m=+0.085942882 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:40:58 localhost python3.9[260521]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:40:58 localhost podman[260522]: 2025-10-05 09:40:58.930930024 +0000 UTC m=+0.094891327 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS) Oct 5 05:40:58 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:40:59 localhost podman[260523]: 2025-10-05 09:40:59.022042266 +0000 UTC m=+0.181879025 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:40:59 localhost podman[260523]: 2025-10-05 09:40:59.02908725 +0000 UTC m=+0.188924009 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:40:59 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:40:59 localhost python3.9[260651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657258.4753017-325-155330567027952/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=c1fe3b6875a03fe9c93d1be48aa23d16c4ec18ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:00 localhost python3.9[260759]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:41:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55642 DF PROTO=TCP SPT=49852 DPT=9102 SEQ=1120705413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B78A50000000001030307) Oct 5 05:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:41:01 localhost podman[260871]: 2025-10-05 09:41:01.767423623 +0000 UTC m=+0.082919797 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7) Oct 5 05:41:01 localhost podman[260871]: 2025-10-05 09:41:01.806214169 +0000 UTC m=+0.121710393 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, version=9.6) Oct 5 05:41:01 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:41:01 localhost python3.9[260872]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55643 DF PROTO=TCP SPT=49852 DPT=9102 SEQ=1120705413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B7CB60000000001030307) Oct 5 05:41:02 localhost python3.9[261000]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:03 localhost python3.9[261057]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:03 localhost python3.9[261167]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=55644 DF PROTO=TCP SPT=49852 DPT=9102 SEQ=1120705413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B84B70000000001030307) Oct 5 05:41:04 localhost python3.9[261224]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:04 localhost python3.9[261334]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:05 localhost python3.9[261444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:06 localhost python3.9[261537]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:06 localhost podman[261670]: 2025-10-05 
09:41:06.458089436 +0000 UTC m=+0.090034234 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, ceph=True, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=553, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:41:06 localhost podman[261670]: 2025-10-05 09:41:06.567165971 +0000 UTC m=+0.199110829 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, release=553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:41:06 localhost python3.9[261738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:41:07 localhost podman[261839]: 2025-10-05 09:41:07.066540936 +0000 UTC m=+0.088147892 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:41:07 localhost podman[261839]: 2025-10-05 09:41:07.105059894 +0000 UTC 
m=+0.126666850 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:41:07 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:41:07 localhost python3.9[261888]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55645 DF PROTO=TCP SPT=49852 DPT=9102 SEQ=1120705413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76B94760000000001030307) Oct 5 05:41:09 localhost python3.9[262053]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:41:09 localhost systemd[1]: Reloading. Oct 5 05:41:09 localhost systemd-sysv-generator[262082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:09 localhost systemd-rc-local-generator[262074]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:41:10 localhost python3.9[262201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:41:11 localhost podman[262259]: 2025-10-05 09:41:11.41465446 +0000 UTC m=+0.080736189 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:41:11 localhost podman[262259]: 2025-10-05 09:41:11.428644814 +0000 UTC 
m=+0.094726533 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:41:11 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:41:11 localhost python3.9[262258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:12 localhost python3.9[262390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:12 localhost python3.9[262447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:41:13 localhost systemd[1]: tmp-crun.vNSrFf.mount: Deactivated successfully. 
Oct 5 05:41:13 localhost podman[262558]: 2025-10-05 09:41:13.197948026 +0000 UTC m=+0.096230595 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid) Oct 5 05:41:13 localhost podman[262558]: 2025-10-05 09:41:13.206300345 +0000 UTC m=+0.104582974 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:41:13 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:41:13 localhost systemd[1]: tmp-crun.qJemI7.mount: Deactivated successfully. 
Oct 5 05:41:13 localhost podman[262559]: 2025-10-05 09:41:13.302459786 +0000 UTC m=+0.197898717 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:41:13 localhost podman[262559]: 2025-10-05 09:41:13.338622069 +0000 UTC m=+0.234060950 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:41:13 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:41:13 localhost python3.9[262557]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:41:13 localhost systemd[1]: Reloading. Oct 5 05:41:13 localhost systemd-sysv-generator[262622]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:13 localhost systemd-rc-local-generator[262617]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:41:13 localhost systemd[1]: Starting Create netns directory... Oct 5 05:41:13 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:41:13 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:41:15 localhost python3.9[262746]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:16 localhost python3.9[262856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:16 localhost openstack_network_exporter[250246]: ERROR 09:41:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:41:16 localhost openstack_network_exporter[250246]: ERROR 09:41:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:41:16 localhost openstack_network_exporter[250246]: ERROR 09:41:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:41:16 localhost openstack_network_exporter[250246]: ERROR 09:41:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:41:16 localhost openstack_network_exporter[250246]: Oct 5 05:41:16 localhost openstack_network_exporter[250246]: ERROR 09:41:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:41:16 localhost openstack_network_exporter[250246]: Oct 5 05:41:16 localhost python3.9[262944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759657275.8944283-736-204419792966969/.source.json _original_basename=.nog57phb follow=False 
checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:17 localhost python3.9[263054]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:41:19 localhost podman[263308]: 2025-10-05 09:41:19.916278114 +0000 UTC m=+0.078071855 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:41:19 localhost podman[263308]: 2025-10-05 09:41:19.990319507 +0000 UTC m=+0.152113228 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:41:20 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:41:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:41:20.376 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:41:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:41:20.377 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:41:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:41:20.377 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:41:20 localhost python3.9[263387]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False Oct 5 05:41:21 localhost python3.9[263497]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:41:23 localhost python3.9[263608]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:41:26 localhost podman[248157]: time="2025-10-05T09:41:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:41:26 localhost podman[248157]: @ - - [05/Oct/2025:09:41:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 132068 "" "Go-http-client/1.1" Oct 5 05:41:26 localhost podman[248157]: @ - - [05/Oct/2025:09:41:26 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15984 "" "Go-http-client/1.1" Oct 5 05:41:27 localhost python3[263745]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:41:27 localhost podman[263782]: Oct 5 05:41:27 localhost podman[263782]: 2025-10-05 09:41:27.332680213 +0000 UTC m=+0.087058102 container create 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:41:27 localhost podman[263782]: 2025-10-05 09:41:27.291515443 +0000 UTC m=+0.045893372 
image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Oct 5 05:41:27 localhost python3[263745]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Oct 5 05:41:28 localhost python3.9[263930]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False 
get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:41:28 localhost python3.9[264042]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:41:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:41:29 localhost podman[264099]: 2025-10-05 09:41:29.312508386 +0000 UTC m=+0.081443418 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:41:29 localhost podman[264099]: 2025-10-05 09:41:29.320816944 +0000 UTC m=+0.089751946 container 
exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:41:29 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 05:41:29 localhost podman[264098]: 2025-10-05 09:41:29.373137862 +0000 UTC m=+0.142068713 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 05:41:29 localhost podman[264098]: 2025-10-05 09:41:29.389364367 +0000 UTC m=+0.158295228 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 05:41:29 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:41:29 localhost python3.9[264097]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:41:30 localhost python3.9[264248]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657289.4927876-1000-24588559889566/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:41:30 localhost python3.9[264303]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:41:30 localhost systemd[1]: Reloading. Oct 5 05:41:30 localhost systemd-sysv-generator[264331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:30 localhost systemd-rc-local-generator[264325]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:41:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=553 DF PROTO=TCP SPT=46364 DPT=9102 SEQ=1161665084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76BEDD60000000001030307) Oct 5 05:41:31 localhost python3.9[264394]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:41:31 localhost systemd[1]: Reloading. Oct 5 05:41:31 localhost systemd-sysv-generator[264426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:41:31 localhost systemd-rc-local-generator[264421]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:41:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:41:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=554 DF PROTO=TCP SPT=46364 DPT=9102 SEQ=1161665084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76BF1F60000000001030307) Oct 5 05:41:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:41:32 localhost systemd[1]: Starting neutron_sriov_agent container... Oct 5 05:41:32 localhost systemd[1]: tmp-crun.vXZhUZ.mount: Deactivated successfully. 
Oct 5 05:41:32 localhost podman[264433]: 2025-10-05 09:41:32.099927398 +0000 UTC m=+0.089399806 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.buildah.version=1.33.7) Oct 5 05:41:32 localhost podman[264433]: 2025-10-05 09:41:32.1633228 +0000 UTC m=+0.152795208 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 05:41:32 localhost systemd[1]: Started libcrun container. Oct 5 05:41:32 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:41:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a5d667ba31d0545f7a91c66da1ab8208e841c481e4ecb742e710cbf11372c7/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a5d667ba31d0545f7a91c66da1ab8208e841c481e4ecb742e710cbf11372c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:32 localhost podman[264441]: 2025-10-05 09:41:32.185083188 +0000 UTC m=+0.153707063 container init 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:41:32 localhost podman[264441]: 2025-10-05 09:41:32.193452708 +0000 UTC 
m=+0.162076583 container start 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251001, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:41:32 localhost podman[264441]: neutron_sriov_agent Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + sudo -E kolla_set_configs Oct 5 05:41:32 localhost systemd[1]: Started neutron_sriov_agent container. 
Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Validating config file Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Copying service configuration files Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Writing out command to execute Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: ++ cat /run_command Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + CMD=/usr/bin/neutron-sriov-nic-agent Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + ARGS= Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + sudo kolla_copy_cacerts Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + [[ ! -n '' ]] Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + . kolla_extend_start Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: Running command: '/usr/bin/neutron-sriov-nic-agent' Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + umask 0022 Oct 5 05:41:32 localhost neutron_sriov_agent[264468]: + exec /usr/bin/neutron-sriov-nic-agent Oct 5 05:41:33 localhost python3.9[264592]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:41:33 localhost systemd[1]: Stopping neutron_sriov_agent container... Oct 5 05:41:33 localhost systemd[1]: libpod-9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292.scope: Deactivated successfully. Oct 5 05:41:33 localhost systemd[1]: libpod-9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292.scope: Consumed 1.182s CPU time. 
Oct 5 05:41:33 localhost podman[264596]: 2025-10-05 09:41:33.391547301 +0000 UTC m=+0.075234368 container died 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 05:41:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292-userdata-shm.mount: Deactivated successfully. Oct 5 05:41:33 localhost systemd[1]: var-lib-containers-storage-overlay-03a5d667ba31d0545f7a91c66da1ab8208e841c481e4ecb742e710cbf11372c7-merged.mount: Deactivated successfully. 
Oct 5 05:41:33 localhost podman[264596]: 2025-10-05 09:41:33.437307258 +0000 UTC m=+0.120994305 container cleanup 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:41:33 localhost podman[264596]: neutron_sriov_agent Oct 5 05:41:33 localhost podman[264622]: 2025-10-05 09:41:33.498305973 +0000 UTC m=+0.038121568 container cleanup 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 05:41:33 localhost podman[264622]: neutron_sriov_agent Oct 5 05:41:33 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Oct 5 05:41:33 localhost systemd[1]: Stopped neutron_sriov_agent container. Oct 5 05:41:33 localhost systemd[1]: Starting neutron_sriov_agent container... Oct 5 05:41:33 localhost systemd[1]: Started libcrun container. 
Oct 5 05:41:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a5d667ba31d0545f7a91c66da1ab8208e841c481e4ecb742e710cbf11372c7/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a5d667ba31d0545f7a91c66da1ab8208e841c481e4ecb742e710cbf11372c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:41:33 localhost podman[264633]: 2025-10-05 09:41:33.621313111 +0000 UTC m=+0.090004973 container init 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=neutron_sriov_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:41:33 localhost podman[264633]: 2025-10-05 09:41:33.629929937 +0000 UTC 
m=+0.098621799 container start 9e6e5614f65e19c08688396a5fae54915f89c14e7eb38e8d8358589c0765f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'f8d882823924caee372bb5390e94bd3fb17e98a51fea0118121a9012cbb331fd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:41:33 localhost podman[264633]: neutron_sriov_agent Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + sudo -E kolla_set_configs Oct 5 05:41:33 localhost systemd[1]: Started neutron_sriov_agent container. 
Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Validating config file Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Copying service configuration files Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Writing out command to execute Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache/python-entrypoints Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/456176946c9b2bc12efd840abf43863005adc00f003c5dd0716ca424d2bec219 Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: ++ cat /run_command Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + CMD=/usr/bin/neutron-sriov-nic-agent Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + ARGS= Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + sudo kolla_copy_cacerts Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + [[ ! -n '' ]] Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + . kolla_extend_start Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: Running command: '/usr/bin/neutron-sriov-nic-agent' Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + umask 0022 Oct 5 05:41:33 localhost neutron_sriov_agent[264647]: + exec /usr/bin/neutron-sriov-nic-agent Oct 5 05:41:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=555 DF PROTO=TCP SPT=46364 DPT=9102 SEQ=1161665084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76BF9F60000000001030307) Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.264 2 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.264 2 INFO neutron.common.config [-] 
/usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.264 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.264 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.265 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.265 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.265 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005471152.localdomain'}#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.265 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-10e495e5-3772-4d24-a962-b3551c06ef0f - - - - - -] RPC agent_id: nic-switch-agent.np0005471152.localdomain#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.270 2 INFO neutron.agent.agent_extensions_manager [None req-10e495e5-3772-4d24-a962-b3551c06ef0f - - - - - -] Loaded agent extensions: ['qos']#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.270 2 INFO neutron.agent.agent_extensions_manager [None req-10e495e5-3772-4d24-a962-b3551c06ef0f - - - - - -] Initializing agent extension 'qos'#033[00m Oct 5 05:41:35 localhost systemd[1]: session-60.scope: Deactivated successfully. 
Oct 5 05:41:35 localhost systemd[1]: session-60.scope: Consumed 23.547s CPU time. Oct 5 05:41:35 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit. Oct 5 05:41:35 localhost systemd-logind[760]: Removed session 60. Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.665 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-10e495e5-3772-4d24-a962-b3551c06ef0f - - - - - -] Agent initialized successfully, now running... #033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.665 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-10e495e5-3772-4d24-a962-b3551c06ef0f - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Oct 5 05:41:35 localhost neutron_sriov_agent[264647]: 2025-10-05 09:41:35.665 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-10e495e5-3772-4d24-a962-b3551c06ef0f - - - - - -] Agent out of sync with plugin!#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.396 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.396 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.396 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.397 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.397 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:41:36 localhost nova_compute[238014]: 2025-10-05 09:41:36.845 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.022 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.023 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12898MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.023 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.024 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.086 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.086 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.421 2 DEBUG oslo_concurrency.processutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.871 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.876 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:41:37 localhost podman[264722]: 2025-10-05 09:41:37.906053005 +0000 UTC m=+0.074923699 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.907 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.910 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain 
_update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:41:37 localhost nova_compute[238014]: 2025-10-05 09:41:37.911 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:41:37 localhost podman[264722]: 2025-10-05 09:41:37.941152589 +0000 UTC m=+0.110023253 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Oct 5 05:41:37 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:41:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=556 DF PROTO=TCP SPT=46364 DPT=9102 SEQ=1161665084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76C09B60000000001030307) Oct 5 05:41:38 localhost nova_compute[238014]: 2025-10-05 09:41:38.911 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:38 localhost nova_compute[238014]: 2025-10-05 09:41:38.912 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:39 localhost nova_compute[238014]: 2025-10-05 09:41:39.373 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:39 localhost nova_compute[238014]: 2025-10-05 09:41:39.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:39 localhost nova_compute[238014]: 2025-10-05 09:41:39.376 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:41:39 localhost nova_compute[238014]: 2025-10-05 09:41:39.376 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:41:39 localhost nova_compute[238014]: 2025-10-05 09:41:39.391 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:41:39 localhost nova_compute[238014]: 2025-10-05 09:41:39.391 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:40 localhost nova_compute[238014]: 2025-10-05 09:41:40.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:40 localhost nova_compute[238014]: 2025-10-05 09:41:40.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:41:40 localhost sshd[264742]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:41:41 localhost systemd-logind[760]: New session 61 of user zuul. Oct 5 05:41:41 localhost systemd[1]: Started Session 61 of User zuul. Oct 5 05:41:41 localhost nova_compute[238014]: 2025-10-05 09:41:41.378 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:41:41 localhost podman[264854]: 2025-10-05 09:41:41.92179607 +0000 UTC m=+0.082722932 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 
'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:41:41 localhost podman[264854]: 2025-10-05 09:41:41.954847498 +0000 UTC m=+0.115774360 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:41:41 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:41:42 localhost python3.9[264853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:41:43 localhost nova_compute[238014]: 2025-10-05 09:41:43.379 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:43 localhost nova_compute[238014]: 2025-10-05 09:41:43.402 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:41:43 localhost python3.9[264989]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:41:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:41:43 localhost podman[264999]: 2025-10-05 09:41:43.912817661 +0000 UTC m=+0.065509350 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:41:43 localhost podman[264999]: 2025-10-05 09:41:43.95430093 +0000 UTC m=+0.106992569 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:41:43 localhost podman[264998]: 2025-10-05 09:41:43.965414115 +0000 UTC m=+0.119704439 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, 
managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:41:43 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:41:43 localhost podman[264998]: 2025-10-05 09:41:43.974653489 +0000 UTC m=+0.128943803 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:41:43 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:41:44 localhost python3.9[265090]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:41:46 localhost openstack_network_exporter[250246]: ERROR 09:41:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:41:46 localhost openstack_network_exporter[250246]: ERROR 09:41:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:41:46 localhost openstack_network_exporter[250246]: ERROR 09:41:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:41:46 localhost openstack_network_exporter[250246]: ERROR 09:41:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:41:46 localhost openstack_network_exporter[250246]: Oct 5 05:41:46 localhost openstack_network_exporter[250246]: ERROR 09:41:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:41:46 localhost openstack_network_exporter[250246]: Oct 5 05:41:48 localhost python3.9[265202]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Oct 5 05:41:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:41:50 localhost systemd[1]: tmp-crun.EENzOz.mount: Deactivated successfully. Oct 5 05:41:50 localhost podman[265315]: 2025-10-05 09:41:50.528819089 +0000 UTC m=+0.100678276 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller) Oct 5 05:41:50 localhost python3.9[265316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None attributes=None Oct 5 05:41:50 localhost podman[265315]: 2025-10-05 09:41:50.642593583 +0000 UTC m=+0.214452810 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 05:41:50 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:41:51 localhost python3.9[265450]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:52 localhost python3.9[265560]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:53 localhost python3.9[265670]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:53 localhost python3.9[265780]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:54 localhost python3.9[265890]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul 
path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:54 localhost python3.9[266000]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:55 localhost python3.9[266110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:56 localhost podman[248157]: time="2025-10-05T09:41:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:41:56 localhost podman[248157]: @ - - [05/Oct/2025:09:41:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134026 "" "Go-http-client/1.1" Oct 5 05:41:56 localhost podman[248157]: @ - - [05/Oct/2025:09:41:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16423 "" "Go-http-client/1.1" Oct 5 05:41:56 localhost python3.9[266198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657315.2657406-280-169125051099653/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 
checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:57 localhost python3.9[266306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:57 localhost python3.9[266392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657316.7867677-325-223167050880843/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:41:58 localhost python3.9[266500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:58 localhost python3.9[266586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657317.9069397-325-146801176802223/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None 
selevel=None attributes=None Oct 5 05:41:59 localhost python3.9[266694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:41:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:41:59 localhost podman[266781]: 2025-10-05 09:41:59.919805378 +0000 UTC m=+0.086856407 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:41:59 localhost podman[266781]: 2025-10-05 09:41:59.933064872 +0000 UTC m=+0.100115911 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 05:41:59 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:41:59 localhost podman[266782]: 2025-10-05 09:41:59.998645573 +0000 UTC m=+0.161537548 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:42:00 localhost podman[266782]: 2025-10-05 09:42:00.010173729 +0000 UTC m=+0.173065724 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, 
container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:42:00 localhost python3.9[266780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657318.9903655-325-98468250692167/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=ca89770cfe42c7272a3aa68acabe951061e231ab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:00 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 05:42:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34281 DF PROTO=TCP SPT=52018 DPT=9102 SEQ=4142556223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76C63050000000001030307) Oct 5 05:42:01 localhost python3.9[266931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34282 DF PROTO=TCP SPT=52018 DPT=9102 SEQ=4142556223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76C66F60000000001030307) Oct 5 05:42:02 localhost python3.9[267017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657320.7634244-499-242139302705114/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=c1fe3b6875a03fe9c93d1be48aa23d16c4ec18ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:42:02 localhost python3.9[267125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:02 localhost podman[267126]: 2025-10-05 09:42:02.92349926 +0000 UTC m=+0.094449385 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:42:02 localhost podman[267126]: 2025-10-05 09:42:02.943321994 +0000 UTC m=+0.114272059 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350) Oct 5 05:42:02 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:42:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34283 DF PROTO=TCP SPT=52018 DPT=9102 SEQ=4142556223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76C6EF70000000001030307) Oct 5 05:42:04 localhost python3.9[267231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657322.5001478-544-124387648358298/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:04 localhost python3.9[267339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:05 localhost python3.9[267425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657324.3174412-544-160586933750397/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:05 localhost python3.9[267533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:06 localhost python3.9[267588]: 
ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:06 localhost python3.9[267696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:07 localhost python3.9[267782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657326.5085156-631-29590223843116/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34284 DF PROTO=TCP SPT=52018 DPT=9102 SEQ=4142556223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76C7EB70000000001030307) Oct 5 05:42:08 localhost python3.9[267890]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:42:08 localhost podman[267927]: 2025-10-05 09:42:08.470974463 +0000 UTC m=+0.066299061 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:42:08 
localhost podman[267927]: 2025-10-05 09:42:08.475877868 +0000 UTC m=+0.071202436 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 05:42:08 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated 
successfully. Oct 5 05:42:08 localhost python3.9[268056]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:09 localhost python3.9[268197]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:09 localhost python3.9[268254]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:10 localhost python3.9[268382]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:11 localhost python3.9[268439]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None attributes=None Oct 5 05:42:12 localhost python3.9[268549]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:42:12 localhost systemd[1]: tmp-crun.GMhj4O.mount: Deactivated successfully. Oct 5 05:42:12 localhost podman[268660]: 2025-10-05 09:42:12.671469544 +0000 UTC m=+0.094320762 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:42:12 localhost podman[268660]: 2025-10-05 09:42:12.68409462 +0000 UTC m=+0.106945758 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:42:12 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:42:12 localhost python3.9[268659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:13 localhost python3.9[268739]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:42:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:42:14 localhost podman[268850]: 2025-10-05 09:42:14.504414862 +0000 UTC m=+0.081449558 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 05:42:14 localhost podman[268850]: 2025-10-05 09:42:14.513690006 +0000 UTC m=+0.090724702 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid) Oct 5 05:42:14 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:42:14 localhost podman[268851]: 2025-10-05 09:42:14.558146287 +0000 UTC m=+0.132313724 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:42:14 localhost podman[268851]: 2025-10-05 09:42:14.568033529 +0000 UTC m=+0.142200926 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:42:14 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:42:14 localhost python3.9[268849]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:15 localhost python3.9[268944]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:15 localhost python3.9[269054]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:42:15 localhost systemd[1]: Reloading. Oct 5 05:42:16 localhost systemd-rc-local-generator[269078]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:42:16 localhost systemd-sysv-generator[269084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:42:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:42:16 localhost openstack_network_exporter[250246]: ERROR 09:42:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:42:16 localhost openstack_network_exporter[250246]: ERROR 09:42:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:16 localhost openstack_network_exporter[250246]: ERROR 09:42:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:16 localhost openstack_network_exporter[250246]: ERROR 09:42:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:42:16 localhost openstack_network_exporter[250246]: Oct 5 05:42:16 localhost openstack_network_exporter[250246]: ERROR 09:42:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:42:16 localhost openstack_network_exporter[250246]: Oct 5 05:42:17 localhost python3.9[269203]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:17 localhost python3.9[269260]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:18 localhost python3.9[269370]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 
05:42:18 localhost python3.9[269427]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:19 localhost python3.9[269537]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:42:19 localhost systemd[1]: Reloading. Oct 5 05:42:19 localhost systemd-sysv-generator[269565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:42:19 localhost systemd-rc-local-generator[269560]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:42:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:42:19 localhost systemd[1]: Starting Create netns directory... Oct 5 05:42:19 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:42:19 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:42:19 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:42:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:20.377 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:42:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:20.378 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:42:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:20.378 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:42:20 localhost python3.9[269689]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:42:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:42:20 localhost podman[269690]: 2025-10-05 09:42:20.91892221 +0000 UTC m=+0.077398541 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 5 05:42:20 localhost podman[269690]: 2025-10-05 09:42:20.958086718 +0000 UTC m=+0.116563029 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3) Oct 5 05:42:20 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:42:21 localhost python3.9[269824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:42:22 localhost python3.9[269912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759657341.0684981-1075-139364665553320/.source.json _original_basename=.ibjhb5gm follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:23 localhost python3.9[270022]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:26 localhost podman[248157]: time="2025-10-05T09:42:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:42:26 localhost podman[248157]: @ - - [05/Oct/2025:09:42:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 134026 "" "Go-http-client/1.1" Oct 5 05:42:26 localhost podman[248157]: @ - - [05/Oct/2025:09:42:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16425 "" "Go-http-client/1.1" Oct 5 05:42:26 localhost python3.9[270330]: ansible-container_config_data Invoked with config_overrides={} 
config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Oct 5 05:42:27 localhost python3.9[270440]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:42:28 localhost python3.9[270550]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:42:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:42:30 localhost podman[270594]: 2025-10-05 09:42:30.918352082 +0000 UTC m=+0.082768036 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 05:42:30 localhost podman[270594]: 2025-10-05 09:42:30.932591795 +0000 UTC m=+0.097007789 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:42:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47374 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=984257593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76CD8360000000001030307) Oct 5 05:42:30 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:42:31 localhost podman[270595]: 2025-10-05 09:42:31.021382655 +0000 UTC m=+0.182372070 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=podman_exporter) Oct 5 05:42:31 localhost podman[270595]: 2025-10-05 09:42:31.060120521 +0000 UTC m=+0.221109926 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:42:31 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 05:42:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47375 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=984257593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76CDC360000000001030307) Oct 5 05:42:32 localhost python3[270725]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:42:32 localhost podman[270761]: Oct 5 05:42:32 localhost podman[270761]: 2025-10-05 09:42:32.721487405 +0000 UTC m=+0.092551750 container create 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=neutron_dhcp, tcib_managed=true, container_name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:42:32 localhost podman[270761]: 2025-10-05 09:42:32.675956523 +0000 UTC m=+0.047020858 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:42:32 localhost python3[270725]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:42:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:42:33 localhost systemd[1]: tmp-crun.RnxpTc.mount: Deactivated successfully. Oct 5 05:42:33 localhost podman[270910]: 2025-10-05 09:42:33.517482204 +0000 UTC m=+0.091721696 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal) Oct 5 05:42:33 localhost podman[270910]: 2025-10-05 09:42:33.53453024 +0000 UTC m=+0.108769772 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public) Oct 5 05:42:33 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:42:33 localhost python3.9[270909]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:42:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47376 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=984257593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76CE4360000000001030307) Oct 5 05:42:34 localhost python3.9[271039]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:35 localhost python3.9[271094]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:42:36 localhost python3.9[271203]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657355.5805118-1339-254302533659864/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.404 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.404 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.404 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.404 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.405 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 05:42:37 localhost python3.9[271258]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 5 05:42:37 localhost systemd[1]: Reloading.
Oct 5 05:42:37 localhost systemd-rc-local-generator[271300]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:42:37 localhost systemd-sysv-generator[271307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:42:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:42:37 localhost nova_compute[238014]: 2025-10-05 09:42:37.919 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 5 05:42:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47377 DF PROTO=TCP SPT=35696 DPT=9102 SEQ=984257593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76CF3F70000000001030307)
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.159 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.162 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12895MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.162 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.163 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.235 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.236 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.257 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 05:42:38 localhost python3.9[271372]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 5 05:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.718 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.725 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 5 05:42:38 localhost systemd[1]: Reloading.
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.741 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.742 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 5 05:42:38 localhost nova_compute[238014]: 2025-10-05 09:42:38.742 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 05:42:38 localhost podman[271393]: 2025-10-05 09:42:38.789998085 +0000 UTC m=+0.098413271 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 5 05:42:38 localhost podman[271393]: 2025-10-05 09:42:38.799180402 +0000 UTC m=+0.107595628 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:42:38 localhost systemd-rc-local-generator[271436]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:42:38 localhost systemd-sysv-generator[271442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.878 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:42:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:42:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:42:39 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:42:39 localhost systemd[1]: Starting neutron_dhcp_agent container...
Oct 5 05:42:39 localhost systemd[1]: tmp-crun.QjgBG0.mount: Deactivated successfully.
Oct 5 05:42:39 localhost systemd[1]: Started libcrun container.
Oct 5 05:42:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceec0a5f1dd7c1577f868c1c8f100301de9483fd9096a85c0fae310a81b98d7e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 5 05:42:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceec0a5f1dd7c1577f868c1c8f100301de9483fd9096a85c0fae310a81b98d7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 05:42:39 localhost podman[271451]: 2025-10-05 09:42:39.258133617 +0000 UTC m=+0.113714804 container init 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 5 05:42:39 localhost podman[271451]: 2025-10-05 09:42:39.269257601 +0000 UTC m=+0.124838788 container start 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=neutron_dhcp_agent, managed_by=edpm_ansible)
Oct 5 05:42:39 localhost podman[271451]: neutron_dhcp_agent
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + sudo -E kolla_set_configs
Oct 5 05:42:39 localhost systemd[1]: Started neutron_dhcp_agent container.
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Validating config file
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Copying service configuration files
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Writing out command to execute
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/456176946c9b2bc12efd840abf43863005adc00f003c5dd0716ca424d2bec219
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: ++ cat /run_command
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + CMD=/usr/bin/neutron-dhcp-agent
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + ARGS=
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + sudo kolla_copy_cacerts
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + [[ ! -n '' ]]
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + . kolla_extend_start
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: Running command: '/usr/bin/neutron-dhcp-agent'
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + umask 0022
Oct 5 05:42:39 localhost neutron_dhcp_agent[271466]: + exec /usr/bin/neutron-dhcp-agent
Oct 5 05:42:39 localhost nova_compute[238014]: 2025-10-05 09:42:39.743 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:40 localhost nova_compute[238014]: 2025-10-05 09:42:40.373 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:40 localhost nova_compute[238014]: 2025-10-05 09:42:40.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:40 localhost nova_compute[238014]: 2025-10-05 09:42:40.376 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 5 05:42:40 localhost nova_compute[238014]: 2025-10-05 09:42:40.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 5 05:42:40 localhost nova_compute[238014]: 2025-10-05 09:42:40.390 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 5 05:42:40 localhost nova_compute[238014]: 2025-10-05 09:42:40.390 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:40 localhost neutron_dhcp_agent[271466]: 2025-10-05 09:42:40.579 271470 INFO neutron.common.config [-] Logging enabled!
Oct 5 05:42:40 localhost neutron_dhcp_agent[271466]: 2025-10-05 09:42:40.580 271470 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Oct 5 05:42:40 localhost neutron_dhcp_agent[271466]: 2025-10-05 09:42:40.946 271470 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Oct 5 05:42:41 localhost python3.9[271591]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 5 05:42:41 localhost systemd[1]: Stopping neutron_dhcp_agent container...
Oct 5 05:42:41 localhost nova_compute[238014]: 2025-10-05 09:42:41.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:41 localhost nova_compute[238014]: 2025-10-05 09:42:41.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:42:41 localhost nova_compute[238014]: 2025-10-05 09:42:41.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 5 05:42:41 localhost systemd[1]: libpod-4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210.scope: Deactivated successfully.
Oct 5 05:42:41 localhost systemd[1]: libpod-4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210.scope: Consumed 1.998s CPU time.
Oct 5 05:42:41 localhost podman[271595]: 2025-10-05 09:42:41.729459387 +0000 UTC m=+0.387921703 container died 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, tcib_managed=true, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS)
Oct 5 05:42:41 localhost podman[271595]: 2025-10-05 09:42:41.781845349 +0000 UTC m=+0.440307645 container cleanup 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible)
Oct 5 05:42:41 localhost podman[271595]: neutron_dhcp_agent
Oct 5 05:42:41 localhost podman[271632]: error opening file `/run/crun/4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210/status`: No such file or directory
Oct 5 05:42:41 localhost podman[271620]: 2025-10-05 09:42:41.874101859 +0000 UTC m=+0.060067936 container cleanup 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 5 05:42:41 localhost podman[271620]: neutron_dhcp_agent
Oct 5 05:42:41 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Oct 5 05:42:41 localhost systemd[1]: Stopped neutron_dhcp_agent container.
Oct 5 05:42:41 localhost systemd[1]: Starting neutron_dhcp_agent container...
Oct 5 05:42:42 localhost systemd[1]: Started libcrun container.
Oct 5 05:42:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceec0a5f1dd7c1577f868c1c8f100301de9483fd9096a85c0fae310a81b98d7e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceec0a5f1dd7c1577f868c1c8f100301de9483fd9096a85c0fae310a81b98d7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:42 localhost podman[271634]: 2025-10-05 09:42:42.037710314 +0000 UTC m=+0.122690846 container init 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=neutron_dhcp_agent) Oct 5 05:42:42 localhost podman[271634]: 2025-10-05 09:42:42.046250622 +0000 UTC m=+0.131231164 container start 4143377536a563e71a2d3827bfc74bab1eb23ad2364d2d33508103320b673210 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1399e11ca369d867a495f265557b0a7ed8c6584e74c1feb0595079f35fbec6d9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_dhcp) Oct 5 05:42:42 localhost 
podman[271634]: neutron_dhcp_agent Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + sudo -E kolla_set_configs Oct 5 05:42:42 localhost systemd[1]: Started neutron_dhcp_agent container. Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Validating config file Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Copying service configuration files Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Writing out command to execute Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/external Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for 
/var/lib/neutron/ovn_metadata_haproxy_wrapper Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23 Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/456176946c9b2bc12efd840abf43863005adc00f003c5dd0716ca424d2bec219 Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: ++ cat /run_command Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + CMD=/usr/bin/neutron-dhcp-agent Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + ARGS= Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + sudo kolla_copy_cacerts Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + [[ ! -n '' ]] Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + . 
kolla_extend_start Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: Running command: '/usr/bin/neutron-dhcp-agent' Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + umask 0022 Oct 5 05:42:42 localhost neutron_dhcp_agent[271649]: + exec /usr/bin/neutron-dhcp-agent Oct 5 05:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:42:42 localhost podman[271681]: 2025-10-05 09:42:42.908895388 +0000 UTC m=+0.077029520 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:42:42 localhost podman[271681]: 2025-10-05 09:42:42.945243624 +0000 UTC m=+0.113377796 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:42:42 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:42:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:43.292 271653 INFO neutron.common.config [-] Logging enabled!#033[00m Oct 5 05:42:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:43.293 271653 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Oct 5 05:42:43 localhost systemd[1]: session-61.scope: Deactivated successfully. Oct 5 05:42:43 localhost systemd[1]: session-61.scope: Consumed 35.190s CPU time. Oct 5 05:42:43 localhost systemd-logind[760]: Session 61 logged out. Waiting for processes to exit. Oct 5 05:42:43 localhost systemd-logind[760]: Removed session 61. Oct 5 05:42:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:43.658 271653 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Oct 5 05:42:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:44.354 271653 INFO neutron.agent.dhcp.agent [None req-af30db20-21f1-4c91-b716-aabf29d9ea6c - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 05:42:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:44.355 271653 INFO neutron.agent.dhcp.agent [-] Starting network cda0aa48-2690-46e0-99f3-e1922fca64be dhcp configuration#033[00m Oct 5 05:42:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:44.403 271653 INFO neutron.agent.dhcp.agent [-] Starting network 20d6a6dc-0f38-4a89-b3fc-56befd04e92f dhcp configuration#033[00m Oct 5 05:42:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:42:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:42:44 localhost podman[271706]: 2025-10-05 09:42:44.908816677 +0000 UTC m=+0.078059418 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3) Oct 5 05:42:44 localhost podman[271706]: 2025-10-05 09:42:44.917459259 +0000 UTC m=+0.086702030 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:42:44 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:42:44 localhost podman[271707]: 2025-10-05 09:42:44.971015886 +0000 UTC m=+0.136845128 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:42:45 localhost podman[271707]: 2025-10-05 09:42:45.007447204 +0000 UTC m=+0.173276476 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:42:45 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:42:45 localhost nova_compute[238014]: 2025-10-05 09:42:45.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:42:45 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:45.880 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:42:45 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:45.882 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 05:42:45 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:45.883 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.249 271653 INFO oslo.privsep.daemon [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmponkd_k8o/privsep.sock']#033[00m Oct 5 05:42:46 localhost openstack_network_exporter[250246]: ERROR 09:42:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:42:46 localhost openstack_network_exporter[250246]: ERROR 09:42:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:46 localhost openstack_network_exporter[250246]: ERROR 09:42:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:42:46 localhost openstack_network_exporter[250246]: ERROR 09:42:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:42:46 localhost openstack_network_exporter[250246]: Oct 5 05:42:46 localhost openstack_network_exporter[250246]: ERROR 09:42:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:42:46 localhost openstack_network_exporter[250246]: Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.925 271653 INFO oslo.privsep.daemon [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.810 271749 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.816 271749 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.820 271749 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.820 271749 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271749#033[00m Oct 5 05:42:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:46.930 271653 WARNING oslo_privsep.priv_context [None req-ef300db1-dc1e-4347-a9ab-8ab90ad55202 - - - - - -] privsep daemon already running#033[00m Oct 5 05:42:47 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:47.463 271653 INFO oslo.privsep.daemon [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpb5r88gvm/privsep.sock']#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:48.083 271653 INFO oslo.privsep.daemon [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:47.972 271759 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:47.977 271759 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:47.980 271759 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:47.981 271759 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271759#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:48.087 
271653 WARNING oslo_privsep.priv_context [None req-ef300db1-dc1e-4347-a9ab-8ab90ad55202 - - - - - -] privsep daemon already running#033[00m Oct 5 05:42:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:48.958 271653 INFO oslo.privsep.daemon [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpyg0qax4j/privsep.sock']#033[00m Oct 5 05:42:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:49.559 271653 INFO oslo.privsep.daemon [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:42:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:49.447 271775 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:42:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:49.453 271775 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:42:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:49.456 271775 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Oct 5 05:42:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:49.457 271775 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271775#033[00m Oct 5 05:42:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:49.562 271653 WARNING oslo_privsep.priv_context [None req-ef300db1-dc1e-4347-a9ab-8ab90ad55202 - - - - - -] privsep daemon already running#033[00m Oct 5 05:42:50 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:50.894 271653 INFO neutron.agent.linux.ip_lib [None req-900b14f4-c4bb-4a01-9d07-6d1c55cfa58d - - - - - -] Device tap1eb1958a-da cannot be used as it has no MAC address#033[00m Oct 
5 05:42:50 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:50.895 271653 INFO neutron.agent.linux.ip_lib [None req-ef300db1-dc1e-4347-a9ab-8ab90ad55202 - - - - - -] Device tap655b4855-8a cannot be used as it has no MAC address#033[00m Oct 5 05:42:50 localhost kernel: device tap1eb1958a-da entered promiscuous mode Oct 5 05:42:50 localhost NetworkManager[5970]: [1759657370.9956] manager: (tap1eb1958a-da): new Generic device (/org/freedesktop/NetworkManager/Devices/13) Oct 5 05:42:51 localhost kernel: device tap655b4855-8a entered promiscuous mode Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:50Z|00025|binding|INFO|Claiming lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 for this chassis. Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:50Z|00026|binding|INFO|1eb1958a-da53-4c8f-aea8-41e19bfe5601: Claiming unknown Oct 5 05:42:51 localhost systemd-udevd[271799]: Network interface NamePolicy= disabled on kernel command line. Oct 5 05:42:51 localhost NetworkManager[5970]: [1759657371.0053] manager: (tap655b4855-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/14) Oct 5 05:42:51 localhost systemd-udevd[271802]: Network interface NamePolicy= disabled on kernel command line. Oct 5 05:42:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00027|if_status|INFO|Not updating pb chassis for 655b4855-8a23-4f52-a891-0c41629acb4f now as sb is readonly Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-fe3fe5-0 Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-85ea67-0 Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-891f35-0 Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.025 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-cda0aa48-2690-46e0-99f3-e1922fca64be', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cda0aa48-2690-46e0-99f3-e1922fca64be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ec7882f-4ab2-4945-a460-196597f602b5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1eb1958a-da53-4c8f-aea8-41e19bfe5601) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 
5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.030 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 1eb1958a-da53-4c8f-aea8-41e19bfe5601 in datapath cda0aa48-2690-46e0-99f3-e1922fca64be bound to our chassis#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.033 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Port f8841526-3a20-41a8-89c0-62e4facfb943 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.034 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cda0aa48-2690-46e0-99f3-e1922fca64be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.035 163201 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpgi1o7gv3/privsep.sock']#033[00m Oct 5 05:42:51 localhost journal[237639]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, ) Oct 5 05:42:51 localhost journal[237639]: hostname: np0005471152.localdomain Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00031|binding|INFO|Claiming lport 655b4855-8a23-4f52-a891-0c41629acb4f for this chassis. 
Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00032|binding|INFO|655b4855-8a23-4f52-a891-0c41629acb4f: Claiming unknown Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.084 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20d6a6dc-0f38-4a89-b3fc-56befd04e92f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f49a96c-a4ec-4b07-9e41-306ef014a4cf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=655b4855-8a23-4f52-a891-0c41629acb4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:42:51 localhost 
journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00033|binding|INFO|Setting lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 ovn-installed in OVS Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00034|binding|INFO|Setting lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 up in Southbound Oct 5 05:42:51 localhost journal[237639]: ethtool ioctl error on tap1eb1958a-da: No such device Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00035|binding|INFO|Setting lport 655b4855-8a23-4f52-a891-0c41629acb4f ovn-installed in OVS Oct 5 05:42:51 localhost ovn_controller[157556]: 2025-10-05T09:42:51Z|00036|binding|INFO|Setting lport 655b4855-8a23-4f52-a891-0c41629acb4f up in Southbound Oct 5 05:42:51 localhost podman[271803]: 2025-10-05 09:42:51.137168414 +0000 UTC m=+0.113224391 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:42:51 localhost podman[271803]: 2025-10-05 09:42:51.19619936 +0000 UTC m=+0.172255327 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true) Oct 5 05:42:51 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.600 163201 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.601 163201 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgi1o7gv3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.509 271895 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.513 271895 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.515 271895 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.515 271895 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271895#033[00m Oct 5 05:42:51 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:51.603 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9371e106-f0bd-4758-a9dd-a7a189a4c75e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.048 271895 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.048 271895 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.048 271895 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:42:52 localhost podman[271951]: Oct 5 05:42:52 localhost podman[271951]: 2025-10-05 09:42:52.111972569 +0000 UTC m=+0.097156394 container create a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.143 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[52ff0ec5-71a8-4a2c-a627-f100b5b4b447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.144 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 655b4855-8a23-4f52-a891-0c41629acb4f in datapath 20d6a6dc-0f38-4a89-b3fc-56befd04e92f unbound from our chassis#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.146 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4b9c866b-6656-4890-9a0e-a602ab5ecba0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 
09:42:52.146 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20d6a6dc-0f38-4a89-b3fc-56befd04e92f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 05:42:52 localhost ovn_metadata_agent[163196]: 2025-10-05 09:42:52.147 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[59b6391d-6b10-4b19-af1e-b09004d441d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 05:42:52 localhost systemd[1]: Started libpod-conmon-a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3.scope. Oct 5 05:42:52 localhost podman[271967]: Oct 5 05:42:52 localhost podman[271967]: 2025-10-05 09:42:52.162505477 +0000 UTC m=+0.095086424 container create b0b3c8010c53b3b6fe2f1da43dd6a9d7c6371899896c6737554bcdad03ca9401 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:42:52 localhost podman[271951]: 2025-10-05 09:42:52.067030673 +0000 UTC m=+0.052214518 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:42:52 localhost systemd[1]: Started libcrun container. Oct 5 05:42:52 localhost systemd[1]: Started libpod-conmon-b0b3c8010c53b3b6fe2f1da43dd6a9d7c6371899896c6737554bcdad03ca9401.scope. 
Oct 5 05:42:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d47fc98d2566baffc148b2425b8ee9d1abe3404f528e542010b714b5b779d9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:52 localhost podman[271951]: 2025-10-05 09:42:52.210182313 +0000 UTC m=+0.195366128 container init a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:42:52 localhost systemd[1]: Started libcrun container. Oct 5 05:42:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200e6dac3a191b1f2303eef8e1636ee671b61e636c21396752d3be0459a33a35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 05:42:52 localhost podman[271951]: 2025-10-05 09:42:52.220005188 +0000 UTC m=+0.205189013 container start a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:42:52 localhost dnsmasq[271991]: started, version 2.85 cachesize 150 Oct 5 05:42:52 localhost dnsmasq[271991]: DNS service limited to 
local subnets Oct 5 05:42:52 localhost dnsmasq[271991]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 05:42:52 localhost dnsmasq[271991]: warning: no upstream servers configured Oct 5 05:42:52 localhost dnsmasq-dhcp[271991]: DHCP, static leases only on 192.168.122.0, lease time 1d Oct 5 05:42:52 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 05:42:52 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 05:42:52 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 05:42:52 localhost podman[271967]: 2025-10-05 09:42:52.227380652 +0000 UTC m=+0.159961599 container init b0b3c8010c53b3b6fe2f1da43dd6a9d7c6371899896c6737554bcdad03ca9401 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 05:42:52 localhost podman[271967]: 2025-10-05 09:42:52.13126849 +0000 UTC m=+0.063849497 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 05:42:52 localhost podman[271967]: 2025-10-05 09:42:52.235635562 +0000 UTC m=+0.168216519 container start b0b3c8010c53b3b6fe2f1da43dd6a9d7c6371899896c6737554bcdad03ca9401 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-20d6a6dc-0f38-4a89-b3fc-56befd04e92f, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 05:42:52 localhost dnsmasq[271993]: started, version 2.85 cachesize 150 Oct 5 05:42:52 localhost dnsmasq[271993]: DNS service limited to local subnets Oct 5 05:42:52 localhost dnsmasq[271993]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 05:42:52 localhost dnsmasq[271993]: warning: no upstream servers configured Oct 5 05:42:52 localhost dnsmasq-dhcp[271993]: DHCP, static leases only on 192.168.0.0, lease time 1d Oct 5 05:42:52 localhost dnsmasq[271993]: read /var/lib/neutron/dhcp/20d6a6dc-0f38-4a89-b3fc-56befd04e92f/addn_hosts - 2 addresses Oct 5 05:42:52 localhost dnsmasq-dhcp[271993]: read /var/lib/neutron/dhcp/20d6a6dc-0f38-4a89-b3fc-56befd04e92f/host Oct 5 05:42:52 localhost dnsmasq-dhcp[271993]: read /var/lib/neutron/dhcp/20d6a6dc-0f38-4a89-b3fc-56befd04e92f/opts Oct 5 05:42:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:52.279 271653 INFO neutron.agent.dhcp.agent [None req-14dab2eb-9f10-46bf-8189-ba4e10f0ca78 - - - - - -] Finished network cda0aa48-2690-46e0-99f3-e1922fca64be dhcp configuration#033[00m Oct 5 05:42:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:52.297 271653 INFO neutron.agent.dhcp.agent [None req-2aededa3-1d2d-477e-acbf-68a8b37aa5cd - - - - - -] Finished network 20d6a6dc-0f38-4a89-b3fc-56befd04e92f dhcp configuration#033[00m Oct 5 05:42:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:52.298 271653 INFO neutron.agent.dhcp.agent [None req-af30db20-21f1-4c91-b716-aabf29d9ea6c - - - - - -] Synchronizing state complete#033[00m Oct 5 05:42:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 
09:42:52.363 271653 INFO neutron.agent.dhcp.agent [None req-af30db20-21f1-4c91-b716-aabf29d9ea6c - - - - - -] DHCP agent started#033[00m Oct 5 05:42:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:52.518 271653 INFO neutron.agent.dhcp.agent [None req-524f9b36-db31-4766-9a6d-f5cdda5682c8 - - - - - -] DHCP configuration for ports {'220c66ec-c183-4fa3-847d-06fa876ccd15', 'b9ee97af-ea88-430c-9c1c-aa81648e44db', '9a7a5af1-bb98-40ce-b1bc-45738e2a191a'} is completed#033[00m Oct 5 05:42:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 09:42:52.826 271653 INFO neutron.agent.dhcp.agent [None req-be290242-0357-4965-a2a0-1b540c9e14be - - - - - -] DHCP configuration for ports {'6395442e-23c3-4111-bf46-0a1e94880ba3', '9a7a5af1-bb98-40ce-b1bc-45738e2a191a', '220c66ec-c183-4fa3-847d-06fa876ccd15', '4db5c636-3094-4e86-9093-8123489e64be', 'b9ee97af-ea88-430c-9c1c-aa81648e44db', 'cd4e79ca-7111-4d41-b9b0-672ba46474d1'} is completed#033[00m Oct 5 05:42:53 localhost systemd[1]: tmp-crun.XV6WVL.mount: Deactivated successfully. 
Oct 5 05:42:56 localhost podman[248157]: time="2025-10-05T09:42:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:42:56 localhost podman[248157]: @ - - [05/Oct/2025:09:42:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139981 "" "Go-http-client/1.1" Oct 5 05:42:56 localhost podman[248157]: @ - - [05/Oct/2025:09:42:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17810 "" "Go-http-client/1.1" Oct 5 05:43:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47085 DF PROTO=TCP SPT=49022 DPT=9102 SEQ=2996713876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76D4D660000000001030307) Oct 5 05:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:43:01 localhost systemd[1]: tmp-crun.yDeeBW.mount: Deactivated successfully. 
Oct 5 05:43:01 localhost podman[271995]: 2025-10-05 09:43:01.932081708 +0000 UTC m=+0.101721946 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:43:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47086 DF PROTO=TCP SPT=49022 DPT=9102 SEQ=2996713876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76D51770000000001030307) Oct 5 05:43:01 localhost podman[271995]: 2025-10-05 09:43:01.995870952 +0000 UTC m=+0.165511200 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0) Oct 5 05:43:02 localhost podman[271996]: 2025-10-05 09:43:02.007484989 +0000 UTC m=+0.174381047 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:43:02 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:43:02 localhost podman[271996]: 2025-10-05 09:43:02.043064304 +0000 UTC m=+0.209960312 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:43:02 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:43:03 localhost podman[272037]: 2025-10-05 09:43:03.91310956 +0000 UTC m=+0.078911914 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, vcs-type=git, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6) Oct 
5 05:43:03 localhost podman[272037]: 2025-10-05 09:43:03.927011894 +0000 UTC m=+0.092814248 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public) Oct 5 05:43:03 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:43:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47087 DF PROTO=TCP SPT=49022 DPT=9102 SEQ=2996713876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76D59770000000001030307) Oct 5 05:43:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47088 DF PROTO=TCP SPT=49022 DPT=9102 SEQ=2996713876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76D69370000000001030307) Oct 5 05:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:43:09 localhost podman[272059]: 2025-10-05 09:43:09.921877576 +0000 UTC m=+0.086264498 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:43:09 localhost podman[272059]: 2025-10-05 09:43:09.957198923 +0000 UTC 
m=+0.121585835 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:43:09 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:43:13 localhost podman[272161]: 2025-10-05 09:43:13.925023543 +0000 UTC m=+0.091131598 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:43:13 localhost podman[272161]: 2025-10-05 09:43:13.96034529 +0000 UTC m=+0.126453345 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:43:13 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:43:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:43:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:43:15 localhost podman[272184]: 2025-10-05 09:43:15.919918609 +0000 UTC m=+0.086048260 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:43:15 localhost podman[272184]: 2025-10-05 09:43:15.95917359 +0000 UTC m=+0.125303221 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 05:43:15 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:43:15 localhost podman[272185]: 2025-10-05 09:43:15.980263723 +0000 UTC m=+0.142855212 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 5 05:43:16 localhost podman[272185]: 2025-10-05 09:43:16.02115059 +0000 UTC m=+0.183742119 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd) Oct 5 05:43:16 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:43:16 localhost openstack_network_exporter[250246]: ERROR 09:43:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:43:16 localhost openstack_network_exporter[250246]: ERROR 09:43:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:16 localhost openstack_network_exporter[250246]: ERROR 09:43:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:16 localhost openstack_network_exporter[250246]: ERROR 09:43:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:43:16 localhost openstack_network_exporter[250246]: Oct 5 05:43:16 localhost openstack_network_exporter[250246]: ERROR 09:43:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:43:16 localhost openstack_network_exporter[250246]: Oct 5 05:43:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:43:20.378 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:43:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:43:20.381 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:43:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:43:20.381 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:43:21 localhost 
ovn_controller[157556]: 2025-10-05T09:43:21Z|00037|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory Oct 5 05:43:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:43:21 localhost podman[272220]: 2025-10-05 09:43:21.922254979 +0000 UTC m=+0.087324788 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 05:43:21 localhost podman[272220]: 2025-10-05 09:43:21.968207694 +0000 UTC m=+0.133277453 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:43:21 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:43:26 localhost podman[248157]: time="2025-10-05T09:43:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:43:26 localhost podman[248157]: @ - - [05/Oct/2025:09:43:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139981 "" "Go-http-client/1.1" Oct 5 05:43:26 localhost podman[248157]: @ - - [05/Oct/2025:09:43:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17824 "" "Go-http-client/1.1" Oct 5 05:43:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10870 DF PROTO=TCP SPT=41870 DPT=9102 SEQ=2756153470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76DC2970000000001030307) Oct 5 05:43:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10871 DF PROTO=TCP SPT=41870 DPT=9102 SEQ=2756153470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76DC6B60000000001030307) Oct 5 05:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:43:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:43:32 localhost podman[272246]: 2025-10-05 09:43:32.925382125 +0000 UTC m=+0.093291503 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:43:32 localhost podman[272246]: 2025-10-05 09:43:32.941209544 +0000 UTC m=+0.109118912 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:43:32 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:43:33 localhost podman[272247]: 2025-10-05 09:43:33.0295187 +0000 UTC m=+0.190292960 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:43:33 localhost podman[272247]: 2025-10-05 09:43:33.036643157 +0000 UTC m=+0.197417407 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:43:33 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:43:33 localhost sshd[272289]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:43:33 localhost systemd-logind[760]: New session 62 of user zuul. Oct 5 05:43:33 localhost systemd[1]: Started Session 62 of User zuul. Oct 5 05:43:33 localhost nova_compute[238014]: 2025-10-05 09:43:33.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10872 DF PROTO=TCP SPT=41870 DPT=9102 SEQ=2756153470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76DCEB60000000001030307) Oct 5 05:43:34 localhost python3.9[272400]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:43:34 localhost podman[272422]: 2025-10-05 09:43:34.935974985 +0000 UTC m=+0.099076509 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 05:43:34 localhost podman[272422]: 2025-10-05 09:43:34.976054741 +0000 UTC m=+0.139156235 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 
'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Oct 5 05:43:34 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:43:35 localhost nova_compute[238014]: 2025-10-05 09:43:35.662 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:35 localhost python3.9[272535]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:36 localhost python3.9[272645]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:37 localhost python3.9[272755]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:37 localhost python3.9[272865]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:43:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10873 DF PROTO=TCP SPT=41870 DPT=9102 SEQ=2756153470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76DDE770000000001030307) Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.379 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.379 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.404 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:43:38 
localhost nova_compute[238014]: 2025-10-05 09:43:38.404 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.405 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.405 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 09:43:38.406 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:43:38 localhost python3.9[272975]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:38 localhost nova_compute[238014]: 2025-10-05 
09:43:38.872 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.107 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.109 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12458MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.109 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.110 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.396 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:43:39 localhost 
nova_compute[238014]: 2025-10-05 09:43:39.397 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.655 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.675 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.675 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 
'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.688 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.713 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MOD
EL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:43:39 localhost nova_compute[238014]: 2025-10-05 09:43:39.729 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:43:39 localhost python3.9[273107]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.211 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.219 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.234 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 
16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.236 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.237 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.238 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.238 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 05:43:40 localhost nova_compute[238014]: 2025-10-05 09:43:40.256 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 05:43:40 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:43:40 localhost systemd[1]: tmp-crun.Lv8zPb.mount: Deactivated successfully. Oct 5 05:43:40 localhost podman[273242]: 2025-10-05 09:43:40.66352861 +0000 UTC m=+0.102625993 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:43:40 localhost podman[273242]: 2025-10-05 09:43:40.701086731 +0000 UTC m=+0.140184134 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:43:40 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:43:40 localhost python3.9[273241]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.256 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.257 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.257 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.257 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.299 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.300 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.300 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.301 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:42 localhost nova_compute[238014]: 2025-10-05 09:43:42.301 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:43:43 localhost python3.9[273369]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:43:43 localhost network[273386]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:43:43 localhost network[273387]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:43:43 localhost network[273388]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:43:44 localhost podman[273395]: 2025-10-05 09:43:44.334058683 +0000 UTC m=+0.086681099 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:43:44 localhost podman[273395]: 2025-10-05 09:43:44.342954342 +0000 UTC m=+0.095576828 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:43:44 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:43:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:43:45 localhost nova_compute[238014]: 2025-10-05 09:43:45.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:45 localhost nova_compute[238014]: 2025-10-05 09:43:45.377 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:43:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:43:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:43:46 localhost podman[273502]: 2025-10-05 09:43:46.105154916 +0000 UTC m=+0.092630423 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2) Oct 5 05:43:46 localhost podman[273502]: 2025-10-05 09:43:46.117197807 +0000 UTC m=+0.104673304 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:43:46 localhost podman[273516]: 2025-10-05 09:43:46.15519784 +0000 UTC m=+0.083348602 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible) Oct 5 05:43:46 localhost podman[273516]: 2025-10-05 09:43:46.162313297 +0000 UTC m=+0.090464039 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:43:46 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:43:46 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:43:46 localhost openstack_network_exporter[250246]: ERROR 09:43:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:43:46 localhost openstack_network_exporter[250246]: ERROR 09:43:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:46 localhost openstack_network_exporter[250246]: ERROR 09:43:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:43:46 localhost openstack_network_exporter[250246]: ERROR 09:43:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:43:46 localhost openstack_network_exporter[250246]: Oct 5 05:43:46 localhost openstack_network_exporter[250246]: ERROR 09:43:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:43:46 localhost openstack_network_exporter[250246]: Oct 5 05:43:47 localhost nova_compute[238014]: 2025-10-05 09:43:47.409 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:48 localhost nova_compute[238014]: 2025-10-05 09:43:48.372 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:43:48 localhost python3.9[273682]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:43:49 localhost python3.9[273792]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:43:50 localhost python3.9[273904]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:43:51 localhost python3.9[274014]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:43:52 localhost systemd[1]: tmp-crun.4qPWR4.mount: Deactivated successfully. 
Oct 5 05:43:52 localhost podman[274125]: 2025-10-05 09:43:52.412069485 +0000 UTC m=+0.098824433 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:43:52 localhost podman[274125]: 2025-10-05 09:43:52.460190703 +0000 UTC m=+0.146945651 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:43:52 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:43:52 localhost python3.9[274124]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:52 localhost python3.9[274206]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:54 localhost python3.9[274316]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:54 localhost python3.9[274373]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:43:55 localhost python3.9[274483]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Oct 5 05:43:56 localhost podman[248157]: time="2025-10-05T09:43:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:43:56 localhost podman[248157]: @ - - [05/Oct/2025:09:43:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139981 "" "Go-http-client/1.1" Oct 5 05:43:56 localhost podman[248157]: @ - - [05/Oct/2025:09:43:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17827 "" "Go-http-client/1.1" Oct 5 05:43:56 localhost python3.9[274593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:56 localhost python3.9[274650]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:43:57 localhost python3.9[274760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:43:57 localhost python3.9[274817]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:43:58 localhost python3.9[274927]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:43:58 localhost systemd[1]: Reloading. Oct 5 05:43:58 localhost systemd-rc-local-generator[274945]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:43:58 localhost systemd-sysv-generator[274952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:43:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:43:59 localhost python3.9[275074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:00 localhost python3.9[275131]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57667 DF PROTO=TCP SPT=60234 DPT=9102 SEQ=1491287044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76E37C50000000001030307) Oct 5 05:44:01 localhost python3.9[275241]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:01 localhost python3.9[275298]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb 
MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57668 DF PROTO=TCP SPT=60234 DPT=9102 SEQ=1491287044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76E3BB60000000001030307) Oct 5 05:44:02 localhost python3.9[275408]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:02 localhost systemd[1]: Reloading. Oct 5 05:44:02 localhost systemd-sysv-generator[275436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:44:02 localhost systemd-rc-local-generator[275431]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:44:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:02 localhost systemd[1]: Starting Create netns directory... Oct 5 05:44:02 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:44:02 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:44:02 localhost systemd[1]: Finished Create netns directory. Oct 5 05:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:44:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:44:03 localhost systemd[1]: tmp-crun.xGltrG.mount: Deactivated successfully. 
Oct 5 05:44:03 localhost podman[275561]: 2025-10-05 09:44:03.893943541 +0000 UTC m=+0.151367039 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:44:03 localhost podman[275560]: 2025-10-05 09:44:03.853946989 +0000 UTC m=+0.115654221 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:44:03 localhost podman[275560]: 2025-10-05 09:44:03.937221979 +0000 UTC m=+0.198929151 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:44:03 localhost python3.9[275559]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:03 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:44:03 localhost podman[275561]: 2025-10-05 09:44:03.966004185 +0000 UTC m=+0.223427623 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:44:03 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:44:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57669 DF PROTO=TCP SPT=60234 DPT=9102 SEQ=1491287044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76E43B60000000001030307) Oct 5 05:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:44:05 localhost podman[275711]: 2025-10-05 09:44:05.16966792 +0000 UTC m=+0.089067910 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container) Oct 5 05:44:05 localhost podman[275711]: 2025-10-05 09:44:05.18209451 +0000 UTC m=+0.101494480 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container) Oct 5 05:44:05 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:44:05 localhost python3.9[275710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:05 localhost python3.9[275785]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/iscsid/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/iscsid/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:07 localhost python3.9[275895]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:08 localhost python3.9[276005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57670 DF PROTO=TCP SPT=60234 DPT=9102 SEQ=1491287044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76E53770000000001030307) Oct 5 05:44:08 localhost python3.9[276062]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/iscsid.json _original_basename=.avgrry9b recurse=False state=file 
path=/var/lib/kolla/config_files/iscsid.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:09 localhost python3.9[276172]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:44:10 localhost systemd[1]: tmp-crun.Tdyc9B.mount: Deactivated successfully. 
Oct 5 05:44:10 localhost podman[276371]: 2025-10-05 09:44:10.936974929 +0000 UTC m=+0.102390826 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:44:10 localhost podman[276371]: 2025-10-05 09:44:10.967927989 +0000 UTC 
m=+0.133343906 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:44:10 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:44:11 localhost python3.9[276467]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False Oct 5 05:44:12 localhost python3.9[276613]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:44:13 localhost python3.9[276756]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:44:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:44:14 localhost podman[276801]: 2025-10-05 09:44:14.954867306 +0000 UTC m=+0.084218858 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:44:14 localhost podman[276801]: 2025-10-05 09:44:14.990953795 +0000 UTC m=+0.120305347 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:44:15 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:44:16 localhost openstack_network_exporter[250246]: ERROR 09:44:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:16 localhost openstack_network_exporter[250246]: ERROR 09:44:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:16 localhost openstack_network_exporter[250246]: ERROR 09:44:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:44:16 localhost openstack_network_exporter[250246]: ERROR 09:44:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:44:16 localhost openstack_network_exporter[250246]: Oct 5 05:44:16 localhost openstack_network_exporter[250246]: ERROR 09:44:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:44:16 localhost openstack_network_exporter[250246]: Oct 5 05:44:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:44:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:44:16 localhost systemd[1]: tmp-crun.42pz4c.mount: Deactivated successfully. 
Oct 5 05:44:16 localhost podman[276842]: 2025-10-05 09:44:16.94589875 +0000 UTC m=+0.130927216 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:44:16 localhost podman[276842]: 2025-10-05 09:44:16.954746257 +0000 UTC m=+0.139774703 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid) Oct 5 05:44:16 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:44:16 localhost podman[276843]: 2025-10-05 09:44:16.918975077 +0000 UTC m=+0.102720456 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:44:16 localhost podman[276843]: 2025-10-05 09:44:16.998717864 +0000 UTC m=+0.182463213 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:44:17 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:44:17 localhost python3[276972]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:44:17 localhost python3[276972]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "777353c8928aa59ae2473c1d38acf1eefa9a0dfeca7b821fed936f9ff9383648",#012 "Digest": "sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:3ec0a9b9c48d1a633c4ec38a126dcd9e46ea9b27d706d3382d04e2097a666bce"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:14:31.883735142Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 403870347,#012 "VirtualSize": 403870347,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/33fb6a56eff879427f2ffe95b5c195f908b1efd66935c01c0a5cfc7e3e2b920e/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:5517f28613540e56901977cf7926b9c77e610f33e0d02e83afbce9137bbc7d2a"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" 
org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main 
keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which Oct 5 05:44:18 localhost python3.9[277145]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:19 localhost python3.9[277257]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:20 localhost python3.9[277312]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:44:20.380 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:44:20.381 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 
5 05:44:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:44:20.381 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:20 localhost python3.9[277421]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657460.0750709-988-8609384569018/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:21 localhost python3.9[277476]: ansible-systemd Invoked with state=started name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:22 localhost python3.9[277586]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:44:22 localhost systemd[1]: tmp-crun.JLObEY.mount: Deactivated successfully. 
Oct 5 05:44:22 localhost podman[277699]: 2025-10-05 09:44:22.892904769 +0000 UTC m=+0.092267993 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 05:44:22 localhost podman[277699]: 2025-10-05 09:44:22.93425192 +0000 UTC m=+0.133615154 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller) Oct 5 05:44:22 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:44:23 localhost python3.9[277698]: ansible-ansible.builtin.systemd Invoked with name=edpm_iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:44:23 localhost systemd[1]: Stopping iscsid container... Oct 5 05:44:23 localhost iscsid[217258]: iscsid shutting down. Oct 5 05:44:23 localhost systemd[1]: libpod-289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.scope: Deactivated successfully. 
Oct 5 05:44:23 localhost podman[277728]: 2025-10-05 09:44:23.230758076 +0000 UTC m=+0.072319153 container died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible) Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.timer: Deactivated successfully. Oct 5 05:44:23 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Failed to open /run/systemd/transient/289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: No such file or directory Oct 5 05:44:23 localhost podman[277728]: 2025-10-05 09:44:23.323856491 +0000 UTC m=+0.165417538 container cleanup 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:44:23 localhost 
podman[277728]: iscsid Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.timer: Failed to open /run/systemd/transient/289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.timer: No such file or directory Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Failed to open /run/systemd/transient/289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: No such file or directory Oct 5 05:44:23 localhost podman[277756]: 2025-10-05 09:44:23.410722365 +0000 UTC m=+0.050054526 container cleanup 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 05:44:23 localhost podman[277756]: iscsid Oct 5 05:44:23 localhost systemd[1]: edpm_iscsid.service: Deactivated successfully. Oct 5 05:44:23 localhost systemd[1]: Stopped iscsid container. Oct 5 05:44:23 localhost systemd[1]: Starting iscsid container... Oct 5 05:44:23 localhost systemd[1]: Started libcrun container. Oct 5 05:44:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5173e5f395dd54856819bdabb620f2befcebd20cb09d8886a7d6a40aaadc39/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:44:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5173e5f395dd54856819bdabb620f2befcebd20cb09d8886a7d6a40aaadc39/merged/etc/target supports timestamps until 2038 (0x7fffffff) Oct 5 05:44:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5173e5f395dd54856819bdabb620f2befcebd20cb09d8886a7d6a40aaadc39/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.timer: Failed to open /run/systemd/transient/289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.timer: No such file or directory Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Failed to open /run/systemd/transient/289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: No such file or directory Oct 5 05:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:44:23 localhost podman[277769]: 2025-10-05 09:44:23.580151178 +0000 UTC m=+0.134898970 container init 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true) Oct 5 05:44:23 localhost iscsid[277783]: + sudo -E kolla_set_configs Oct 5 05:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:44:23 localhost podman[277769]: 2025-10-05 09:44:23.622520959 +0000 UTC m=+0.177268751 container start 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 05:44:23 localhost podman[277769]: iscsid Oct 5 05:44:23 localhost systemd[1]: Started iscsid container. Oct 5 05:44:23 localhost systemd[1]: Created slice User Slice of UID 0. Oct 5 05:44:23 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Oct 5 05:44:23 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Oct 5 05:44:23 localhost systemd[1]: Starting User Manager for UID 0... Oct 5 05:44:23 localhost podman[277791]: 2025-10-05 09:44:23.72269761 +0000 UTC m=+0.092117267 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid) Oct 5 05:44:23 localhost podman[277791]: 2025-10-05 09:44:23.732545066 +0000 UTC m=+0.101964683 
container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:44:23 localhost podman[277791]: unhealthy Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Main process exited, code=exited, status=1/FAILURE Oct 5 05:44:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Failed with result 'exit-code'. 
Oct 5 05:44:23 localhost systemd[277803]: Queued start job for default target Main User Target. Oct 5 05:44:23 localhost systemd[277803]: Created slice User Application Slice. Oct 5 05:44:23 localhost systemd[277803]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Oct 5 05:44:23 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Oct 5 05:44:23 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:44:23 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:44:23 localhost systemd[277803]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:44:23 localhost systemd[277803]: Reached target Paths. Oct 5 05:44:23 localhost systemd[277803]: Reached target Timers. Oct 5 05:44:23 localhost systemd[277803]: Starting D-Bus User Message Bus Socket... Oct 5 05:44:23 localhost systemd[277803]: Starting Create User's Volatile Files and Directories... Oct 5 05:44:23 localhost systemd[277803]: Finished Create User's Volatile Files and Directories. Oct 5 05:44:23 localhost systemd[277803]: Listening on D-Bus User Message Bus Socket. Oct 5 05:44:23 localhost systemd[277803]: Reached target Sockets. Oct 5 05:44:23 localhost systemd[277803]: Reached target Basic System. Oct 5 05:44:23 localhost systemd[277803]: Reached target Main User Target. Oct 5 05:44:23 localhost systemd[277803]: Startup finished in 145ms. Oct 5 05:44:23 localhost systemd[1]: Started User Manager for UID 0. Oct 5 05:44:23 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:44:23 localhost systemd[1]: Started Session c15 of User root. Oct 5 05:44:23 localhost iscsid[277783]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Oct 5 05:44:23 localhost iscsid[277783]: INFO:__main__:Validating config file Oct 5 05:44:23 localhost iscsid[277783]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Oct 5 05:44:23 localhost iscsid[277783]: INFO:__main__:Writing out command to execute Oct 5 05:44:23 localhost systemd[1]: session-c15.scope: Deactivated successfully. Oct 5 05:44:23 localhost iscsid[277783]: ++ cat /run_command Oct 5 05:44:23 localhost iscsid[277783]: + CMD='/usr/sbin/iscsid -f' Oct 5 05:44:23 localhost iscsid[277783]: + ARGS= Oct 5 05:44:23 localhost iscsid[277783]: + sudo kolla_copy_cacerts Oct 5 05:44:24 localhost systemd[1]: Started Session c16 of User root. Oct 5 05:44:24 localhost systemd[1]: session-c16.scope: Deactivated successfully. Oct 5 05:44:24 localhost iscsid[277783]: + [[ ! -n '' ]] Oct 5 05:44:24 localhost iscsid[277783]: + . kolla_extend_start Oct 5 05:44:24 localhost iscsid[277783]: Running command: '/usr/sbin/iscsid -f' Oct 5 05:44:24 localhost iscsid[277783]: ++ [[ ! 
-f /etc/iscsi/initiatorname.iscsi ]] Oct 5 05:44:24 localhost iscsid[277783]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\''' Oct 5 05:44:24 localhost iscsid[277783]: + umask 0022 Oct 5 05:44:24 localhost iscsid[277783]: + exec /usr/sbin/iscsid -f Oct 5 05:44:25 localhost python3.9[277941]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:26 localhost podman[248157]: time="2025-10-05T09:44:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:44:26 localhost podman[248157]: @ - - [05/Oct/2025:09:44:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139980 "" "Go-http-client/1.1" Oct 5 05:44:26 localhost podman[248157]: @ - - [05/Oct/2025:09:44:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17815 "" "Go-http-client/1.1" Oct 5 05:44:26 localhost python3.9[278051]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:44:26 localhost network[278068]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:44:26 localhost network[278069]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:44:26 localhost network[278070]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:44:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:44:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20381 DF PROTO=TCP SPT=53798 DPT=9102 SEQ=618278897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76EACF50000000001030307) Oct 5 05:44:31 localhost python3.9[278305]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:44:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20382 DF PROTO=TCP SPT=53798 DPT=9102 SEQ=618278897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76EB0F60000000001030307) Oct 5 05:44:32 localhost python3.9[278415]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Oct 5 05:44:32 localhost python3.9[278525]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:33 localhost python3.9[278582]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Oct 5 05:44:33 localhost python3.9[278692]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20383 DF PROTO=TCP SPT=53798 DPT=9102 SEQ=618278897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76EB8F60000000001030307) Oct 5 05:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:44:34 localhost systemd[1]: Stopping User Manager for UID 0... Oct 5 05:44:34 localhost systemd[277803]: Activating special unit Exit the Session... Oct 5 05:44:34 localhost systemd[277803]: Stopped target Main User Target. Oct 5 05:44:34 localhost systemd[277803]: Stopped target Basic System. Oct 5 05:44:34 localhost systemd[277803]: Stopped target Paths. Oct 5 05:44:34 localhost systemd[277803]: Stopped target Sockets. Oct 5 05:44:34 localhost systemd[277803]: Stopped target Timers. Oct 5 05:44:34 localhost systemd[277803]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:44:34 localhost systemd[277803]: Closed D-Bus User Message Bus Socket. Oct 5 05:44:34 localhost systemd[277803]: Stopped Create User's Volatile Files and Directories. Oct 5 05:44:34 localhost systemd[277803]: Removed slice User Application Slice. Oct 5 05:44:34 localhost systemd[277803]: Reached target Shutdown. 
Oct 5 05:44:34 localhost systemd[277803]: Finished Exit the Session. Oct 5 05:44:34 localhost systemd[277803]: Reached target Exit the Session. Oct 5 05:44:34 localhost systemd[1]: user@0.service: Deactivated successfully. Oct 5 05:44:34 localhost systemd[1]: Stopped User Manager for UID 0. Oct 5 05:44:34 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Oct 5 05:44:34 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Oct 5 05:44:34 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Oct 5 05:44:34 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Oct 5 05:44:34 localhost systemd[1]: Removed slice User Slice of UID 0. Oct 5 05:44:34 localhost systemd[1]: tmp-crun.bvthrp.mount: Deactivated successfully. Oct 5 05:44:34 localhost podman[278711]: 2025-10-05 09:44:34.184096707 +0000 UTC m=+0.097349270 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:44:34 localhost podman[278711]: 2025-10-05 09:44:34.198042583 +0000 UTC m=+0.111295096 container exec_died 
ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:44:34 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 05:44:34 localhost podman[278710]: 2025-10-05 09:44:34.270523312 +0000 UTC m=+0.182805647 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 05:44:34 localhost podman[278710]: 2025-10-05 09:44:34.307412585 +0000 UTC m=+0.219694980 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 05:44:34 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:44:34 localhost python3.9[278847]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:44:35 localhost systemd[1]: tmp-crun.JvlaMF.mount: Deactivated successfully. Oct 5 05:44:35 localhost podman[278957]: 2025-10-05 09:44:35.619029477 +0000 UTC m=+0.092473024 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public) Oct 5 05:44:35 localhost podman[278957]: 2025-10-05 09:44:35.636161712 +0000 UTC m=+0.109605249 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41) Oct 5 05:44:35 
localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:44:35 localhost python3.9[278958]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:36 localhost python3.9[279089]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20384 DF PROTO=TCP SPT=53798 DPT=9102 SEQ=618278897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76EC8B60000000001030307) Oct 5 05:44:38 localhost python3.9[279201]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.879 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 
localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:44:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:44:39 localhost python3.9[279312]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.399 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.399 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.400 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.400 2 DEBUG nova.compute.resource_tracker [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.400 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:44:39 localhost nova_compute[238014]: 2025-10-05 09:44:39.814 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.059 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.060 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12449MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.061 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.061 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.115 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.115 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.129 2 DEBUG oslo_concurrency.processutils [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:44:40 localhost python3.9[279445]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.609 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.614 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.627 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} 
set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.628 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:44:40 localhost nova_compute[238014]: 2025-10-05 09:44:40.628 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:44:40 localhost python3.9[279576]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:44:41 localhost podman[279687]: 2025-10-05 09:44:41.553214328 +0000 UTC m=+0.091201558 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Oct 5 05:44:41 localhost podman[279687]: 2025-10-05 09:44:41.584470705 +0000 UTC 
m=+0.122457915 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:44:41 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.629 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.630 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.630 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.648 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.649 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.649 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.650 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.650 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:41 localhost nova_compute[238014]: 2025-10-05 09:44:41.650 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:44:41 localhost python3.9[279686]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:42 localhost python3.9[279815]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:42 localhost nova_compute[238014]: 2025-10-05 09:44:42.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:43 localhost python3.9[279925]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:44:43 localhost nova_compute[238014]: 2025-10-05 09:44:43.372 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:44 localhost python3.9[280037]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:44 localhost python3.9[280147]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:44:45 localhost systemd[1]: tmp-crun.PDltUK.mount: Deactivated successfully. Oct 5 05:44:45 localhost podman[280205]: 2025-10-05 09:44:45.249850134 +0000 UTC m=+0.092539236 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:44:45 localhost podman[280205]: 2025-10-05 09:44:45.286122539 +0000 UTC m=+0.128811661 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:44:45 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:44:45 localhost python3.9[280204]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:45 localhost python3.9[280337]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:46 localhost python3.9[280394]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:46 localhost openstack_network_exporter[250246]: ERROR 09:44:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:46 localhost openstack_network_exporter[250246]: ERROR 09:44:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:44:46 localhost openstack_network_exporter[250246]: ERROR 09:44:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:44:46 localhost openstack_network_exporter[250246]: ERROR 09:44:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an 
existing datapath Oct 5 05:44:46 localhost openstack_network_exporter[250246]: Oct 5 05:44:46 localhost openstack_network_exporter[250246]: ERROR 09:44:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:44:46 localhost openstack_network_exporter[250246]: Oct 5 05:44:47 localhost python3.9[280504]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:47 localhost nova_compute[238014]: 2025-10-05 09:44:47.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:44:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:44:47 localhost systemd[1]: tmp-crun.VFPiqw.mount: Deactivated successfully. 
Oct 5 05:44:47 localhost podman[280615]: 2025-10-05 09:44:47.778319983 +0000 UTC m=+0.098596714 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=multipathd) Oct 5 05:44:47 localhost podman[280615]: 2025-10-05 09:44:47.792576388 +0000 UTC m=+0.112853129 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:44:47 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:44:47 localhost python3.9[280614]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:48 localhost python3.9[280690]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:49 localhost python3.9[280800]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:50 localhost python3.9[280857]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:51 localhost python3.9[280967]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:51 localhost systemd[1]: Reloading. 
Oct 5 05:44:52 localhost systemd-rc-local-generator[280994]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:44:52 localhost systemd-sysv-generator[280998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:44:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:44:53 localhost podman[281006]: 2025-10-05 09:44:53.625979687 +0000 UTC m=+0.089962105 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:44:53 localhost podman[281006]: 2025-10-05 09:44:53.732292633 +0000 UTC m=+0.196275021 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:44:53 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:44:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:44:53 localhost podman[281047]: 2025-10-05 09:44:53.866463282 +0000 UTC m=+0.090015246 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:44:53 localhost podman[281047]: 2025-10-05 09:44:53.881096077 +0000 UTC m=+0.104648011 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:44:53 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:44:54 localhost python3.9[281158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:54 localhost python3.9[281215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:55 localhost python3.9[281325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:56 localhost podman[248157]: time="2025-10-05T09:44:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:44:56 localhost python3.9[281382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:44:56 localhost podman[248157]: @ - - [05/Oct/2025:09:44:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:44:56 localhost podman[248157]: @ 
- - [05/Oct/2025:09:44:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17822 "" "Go-http-client/1.1" Oct 5 05:44:56 localhost python3.9[281492]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:44:56 localhost systemd[1]: Reloading. Oct 5 05:44:56 localhost systemd-rc-local-generator[281515]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:44:56 localhost systemd-sysv-generator[281522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:44:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:44:57 localhost systemd[1]: Starting Create netns directory... Oct 5 05:44:57 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Oct 5 05:44:57 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Oct 5 05:44:57 localhost systemd[1]: Finished Create netns directory. 
Oct 5 05:44:58 localhost python3.9[281644]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:44:58 localhost python3.9[281754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:44:59 localhost python3.9[281811]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:45:00 localhost python3.9[281921]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:45:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 4912 writes, 
22K keys, 4912 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4912 writes, 673 syncs, 7.30 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:45:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1893 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=103883746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76F22280000000001030307) Oct 5 05:45:01 localhost python3.9[282031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:45:01 localhost python3.9[282088]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.002yh8e_ recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1894 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=103883746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76F26360000000001030307) Oct 5 05:45:02 localhost python3.9[282198]: ansible-ansible.builtin.file Invoked with mode=0755 
path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1895 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=103883746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76F2E360000000001030307) Oct 5 05:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:45:04 localhost podman[282475]: 2025-10-05 09:45:04.606713916 +0000 UTC m=+0.080358108 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:45:04 localhost podman[282475]: 2025-10-05 09:45:04.625467025 +0000 UTC m=+0.099111267 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:45:04 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:45:04 localhost podman[282477]: 2025-10-05 09:45:04.720444078 +0000 UTC m=+0.191950061 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:45:04 localhost podman[282477]: 2025-10-05 09:45:04.762287928 +0000 UTC m=+0.233793871 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:45:04 localhost python3.9[282476]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Oct 5 05:45:04 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:45:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5685 writes, 24K keys, 5685 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5685 writes, 735 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 20 writes, 40 keys, 20 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s#012Interval WAL: 20 writes, 10 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:45:05 localhost python3.9[282627]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:45:05 localhost systemd[1]: tmp-crun.Lw1WZG.mount: Deactivated successfully. 
Oct 5 05:45:05 localhost podman[282645]: 2025-10-05 09:45:05.932224904 +0000 UTC m=+0.097255847 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:45:05 localhost podman[282645]: 2025-10-05 09:45:05.952120575 +0000 UTC m=+0.117151478 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350) Oct 5 05:45:05 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:45:06 localhost python3.9[282755]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Oct 5 05:45:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1896 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=103883746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76F3DF70000000001030307) Oct 5 05:45:10 localhost python3[282892]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:45:11 localhost python3[282892]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "cfea91e4d3d24ea2b93aee805b1650aeb46d9546bcbf0bc2c512e1c027bd6148",#012 "Digest": "sha256:2e6b33858f10c5161efa5026fe197bed1871f616a88492deb2d9589afe55f306",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:2e6b33858f10c5161efa5026fe197bed1871f616a88492deb2d9589afe55f306"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:10:07.809982095Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 
"org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249392108,#012 "VirtualSize": 249392108,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/7b5d9698f5e241817bc1ab20fc93517a066d97944c963cb3e8954ea8e4465d09/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/7b5d9698f5e241817bc1ab20fc93517a066d97944c963cb3e8954ea8e4465d09/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:fd97c8266967784f89c19eba886aaa5428c66f61d661ea8625c291c6fd888856"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps 
False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:05.877369315Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:06.203051718Z",#012 Oct 5 05:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:45:11 localhost podman[283064]: 2025-10-05 09:45:11.855139152 +0000 UTC m=+0.082210709 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:45:11 localhost podman[283064]: 2025-10-05 09:45:11.886006118 +0000 UTC m=+0.113077655 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:45:11 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:45:11 localhost python3.9[283065]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:45:12 localhost python3.9[283194]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:13 localhost python3.9[283249]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:45:14 localhost python3.9[283358]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657513.4013214-2192-31149289977497/source dest=/etc/systemd/system/edpm_multipathd.service 
mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:14 localhost python3.9[283413]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:15 localhost python3.9[283523]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:45:15 localhost podman[283573]: 2025-10-05 09:45:15.919291484 +0000 UTC m=+0.083866985 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:45:15 localhost podman[283573]: 2025-10-05 09:45:15.931172303 +0000 UTC m=+0.095747784 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:45:15 localhost systemd[1]: 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:45:16 localhost python3.9[283656]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:16 localhost openstack_network_exporter[250246]: ERROR 09:45:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:45:16 localhost openstack_network_exporter[250246]: ERROR 09:45:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:45:16 localhost openstack_network_exporter[250246]: ERROR 09:45:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:45:16 localhost openstack_network_exporter[250246]: ERROR 09:45:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:45:16 localhost openstack_network_exporter[250246]: Oct 5 05:45:16 localhost openstack_network_exporter[250246]: ERROR 09:45:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:45:16 localhost openstack_network_exporter[250246]: Oct 5 05:45:17 localhost python3.9[283802]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Oct 5 05:45:17 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:45:17 localhost podman[283851]: 2025-10-05 09:45:17.952486646 +0000 UTC m=+0.118522966 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:45:17 localhost 
podman[283851]: 2025-10-05 09:45:17.964926581 +0000 UTC m=+0.130962881 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:45:17 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:45:18 localhost python3.9[283996]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Oct 5 05:45:19 localhost python3.9[284126]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:45:20 localhost python3.9[284183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:45:20.382 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:45:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:45:20.383 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:45:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:45:20.383 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:45:21 localhost python3.9[284293]: ansible-ansible.builtin.lineinfile Invoked with create=True 
dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:21 localhost python3.9[284421]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Oct 5 05:45:22 localhost python3.9[284484]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Oct 5 05:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:45:23 localhost systemd[1]: tmp-crun.3fK4IU.mount: Deactivated successfully. 
Oct 5 05:45:23 localhost podman[284487]: 2025-10-05 09:45:23.925053291 +0000 UTC m=+0.090545111 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller) Oct 5 05:45:24 localhost systemd[1]: tmp-crun.BDUZpR.mount: Deactivated successfully. 
Oct 5 05:45:24 localhost podman[284502]: 2025-10-05 09:45:24.00943214 +0000 UTC m=+0.079400242 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:45:24 localhost podman[284502]: 2025-10-05 09:45:24.022198643 +0000 UTC m=+0.092166765 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 05:45:24 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:45:24 localhost podman[284487]: 2025-10-05 09:45:24.060479374 +0000 UTC m=+0.225971154 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 05:45:24 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:45:26 localhost podman[248157]: time="2025-10-05T09:45:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:45:26 localhost podman[248157]: @ - - [05/Oct/2025:09:45:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:45:26 localhost podman[248157]: @ - - [05/Oct/2025:09:45:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17816 "" "Go-http-client/1.1" Oct 5 05:45:26 localhost python3.9[284636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Oct 5 05:45:27 localhost python3.9[284750]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:29 localhost python3.9[284860]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:45:29 localhost systemd[1]: Reloading. Oct 5 05:45:29 localhost systemd-rc-local-generator[284884]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:45:29 localhost systemd-sysv-generator[284887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:45:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:45:30 localhost python3.9[285004]: ansible-ansible.builtin.service_facts Invoked Oct 5 05:45:30 localhost network[285021]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Oct 5 05:45:30 localhost network[285022]: 'network-scripts' will be removed from distribution in near future. Oct 5 05:45:30 localhost network[285023]: It is advised to switch to 'NetworkManager' instead for network management. Oct 5 05:45:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12641 DF PROTO=TCP SPT=51644 DPT=9102 SEQ=3776903306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76F97550000000001030307) Oct 5 05:45:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:45:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12642 DF PROTO=TCP SPT=51644 DPT=9102 SEQ=3776903306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76F9B760000000001030307) Oct 5 05:45:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12643 DF PROTO=TCP SPT=51644 DPT=9102 SEQ=3776903306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76FA3760000000001030307) Oct 5 05:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 05:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:45:34 localhost podman[285167]: 2025-10-05 09:45:34.93022626 +0000 UTC m=+0.090613192 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:45:34 localhost podman[285167]: 2025-10-05 09:45:34.941084721 +0000 UTC m=+0.101471683 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': 
{'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:45:34 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:45:35 localhost podman[285166]: 2025-10-05 09:45:35.031336602 +0000 UTC m=+0.191086727 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:45:35 localhost podman[285166]: 2025-10-05 09:45:35.042176673 +0000 UTC m=+0.201926808 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:45:35 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:45:35 localhost python3.9[285296]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:45:36 localhost systemd[1]: tmp-crun.bXe4V3.mount: Deactivated successfully. Oct 5 05:45:36 localhost podman[285407]: 2025-10-05 09:45:36.130912949 +0000 UTC m=+0.087994010 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git) Oct 5 05:45:36 localhost podman[285407]: 2025-10-05 09:45:36.144706221 +0000 UTC m=+0.101787252 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, architecture=x86_64) Oct 5 05:45:36 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:45:36 localhost python3.9[285408]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:37 localhost python3.9[285538]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:37 localhost python3.9[285649]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12644 DF PROTO=TCP SPT=51644 DPT=9102 SEQ=3776903306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC76FB3370000000001030307) Oct 5 05:45:38 localhost python3.9[285760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:39 localhost python3.9[285871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:40 localhost python3.9[285982]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 
- - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.378 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.399 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.400 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.401 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 
- - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.401 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.401 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:45:41 localhost nova_compute[238014]: 2025-10-05 09:45:41.857 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:45:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.046 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.048 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12441MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.048 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.049 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:45:42 localhost podman[286116]: 2025-10-05 09:45:42.058289241 +0000 UTC m=+0.092133674 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 05:45:42 localhost podman[286116]: 2025-10-05 09:45:42.088131208 +0000 UTC m=+0.121975581 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Oct 5 05:45:42 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.101 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.101 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.132 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:45:42 localhost python3.9[286115]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.663 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.669 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.687 2 DEBUG nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.689 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:45:42 localhost nova_compute[238014]: 2025-10-05 09:45:42.690 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.690 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.691 2 DEBUG oslo_service.periodic_task [None 
req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.691 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.691 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.712 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.713 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.714 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:43 localhost nova_compute[238014]: 2025-10-05 09:45:43.714 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:46 localhost openstack_network_exporter[250246]: ERROR 09:45:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:45:46 localhost openstack_network_exporter[250246]: ERROR 09:45:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:45:46 localhost openstack_network_exporter[250246]: ERROR 09:45:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:45:46 localhost openstack_network_exporter[250246]: ERROR 09:45:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:45:46 localhost openstack_network_exporter[250246]: Oct 5 05:45:46 localhost openstack_network_exporter[250246]: ERROR 09:45:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:45:46 localhost 
openstack_network_exporter[250246]: Oct 5 05:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:45:46 localhost systemd[1]: tmp-crun.x8P5YS.mount: Deactivated successfully. Oct 5 05:45:46 localhost podman[286266]: 2025-10-05 09:45:46.852398814 +0000 UTC m=+0.078933569 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:45:46 localhost podman[286266]: 2025-10-05 09:45:46.863213274 +0000 UTC m=+0.089748019 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:45:46 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:45:46 localhost python3.9[286265]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:47 localhost python3.9[286398]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:45:48 localhost systemd[1]: tmp-crun.FP4li6.mount: Deactivated successfully. 
Oct 5 05:45:48 localhost podman[286509]: 2025-10-05 09:45:48.182208871 +0000 UTC m=+0.086866948 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:45:48 localhost podman[286509]: 2025-10-05 09:45:48.195891871 +0000 UTC m=+0.100549938 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 05:45:48 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:45:48 localhost python3.9[286508]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:48 localhost nova_compute[238014]: 2025-10-05 09:45:48.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:48 localhost python3.9[286638]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:49 localhost python3.9[286748]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:50 localhost python3.9[286858]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:50 localhost python3.9[286968]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:51 localhost python3.9[287078]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:51 localhost nova_compute[238014]: 2025-10-05 09:45:51.372 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:45:52 localhost python3.9[287188]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:52 
localhost python3.9[287298]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:53 localhost python3.9[287408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:53 localhost python3.9[287518]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:45:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:45:54 localhost podman[287629]: 2025-10-05 09:45:54.485959855 +0000 UTC m=+0.084888493 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:45:54 localhost podman[287629]: 2025-10-05 09:45:54.494693717 +0000 UTC m=+0.093622385 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:45:54 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:45:54 localhost podman[287630]: 2025-10-05 09:45:54.542379149 +0000 UTC m=+0.131172767 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:45:54 localhost python3.9[287628]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:54 localhost podman[287630]: 2025-10-05 
09:45:54.635433507 +0000 UTC m=+0.224227075 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:45:54 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:45:55 localhost python3.9[287782]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:55 localhost python3.9[287892]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:56 localhost podman[248157]: time="2025-10-05T09:45:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:45:56 localhost podman[248157]: @ - - [05/Oct/2025:09:45:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:45:56 localhost podman[248157]: @ - - [05/Oct/2025:09:45:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17820 "" "Go-http-client/1.1" Oct 5 05:45:56 localhost python3.9[288002]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:45:57 
localhost python3.9[288112]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:45:58 localhost python3.9[288222]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Oct 5 05:45:59 localhost python3.9[288332]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Oct 5 05:45:59 localhost systemd[1]: Reloading. Oct 5 05:45:59 localhost systemd-sysv-generator[288362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:45:59 localhost systemd-rc-local-generator[288359]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:45:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 5 05:46:00 localhost python3.9[288478]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:00 localhost python3.9[288589]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20121 DF PROTO=TCP SPT=35550 DPT=9102 SEQ=3745389554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7700C850000000001030307) Oct 5 05:46:01 localhost python3.9[288700]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20122 DF PROTO=TCP SPT=35550 DPT=9102 SEQ=3745389554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77010770000000001030307) Oct 5 05:46:02 localhost python3.9[288811]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None 
stdin=None Oct 5 05:46:03 localhost python3.9[288922]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:03 localhost python3.9[289033]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20123 DF PROTO=TCP SPT=35550 DPT=9102 SEQ=3745389554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77018760000000001030307) Oct 5 05:46:04 localhost python3.9[289144]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:04 localhost python3.9[289255]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:46:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 05:46:05 localhost podman[289257]: 2025-10-05 09:46:05.107050437 +0000 UTC m=+0.099300312 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:46:05 localhost podman[289257]: 2025-10-05 09:46:05.141830082 +0000 UTC m=+0.134079957 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:46:05 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:46:05 localhost podman[289292]: 2025-10-05 09:46:05.197346341 +0000 UTC m=+0.081192172 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:46:05 localhost podman[289292]: 2025-10-05 09:46:05.236964009 +0000 UTC m=+0.120809870 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:46:05 
localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:46:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:46:06 localhost systemd[1]: tmp-crun.r952f0.mount: Deactivated successfully. Oct 5 05:46:06 localhost podman[289316]: 2025-10-05 09:46:06.927203143 +0000 UTC m=+0.091077474 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, io.openshift.expose-services=) Oct 5 05:46:06 localhost podman[289316]: 2025-10-05 09:46:06.942182968 +0000 UTC m=+0.106057329 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, version=9.6) Oct 5 05:46:06 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:46:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20124 DF PROTO=TCP SPT=35550 DPT=9102 SEQ=3745389554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77028360000000001030307) Oct 5 05:46:08 localhost python3.9[289428]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:08 localhost python3.9[289538]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:09 localhost python3.9[289648]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:10 localhost python3.9[289758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:10 localhost python3.9[289868]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:11 localhost python3.9[289978]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:12 localhost python3.9[290088]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:46:12 localhost systemd[1]: tmp-crun.a8ppNC.mount: Deactivated successfully. 
Oct 5 05:46:12 localhost podman[290160]: 2025-10-05 09:46:12.927025794 +0000 UTC m=+0.089298046 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS) Oct 5 05:46:12 localhost podman[290160]: 2025-10-05 09:46:12.931319592 +0000 UTC 
m=+0.093591854 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent) Oct 5 05:46:12 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:46:13 localhost python3.9[290216]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:14 localhost python3.9[290326]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:14 localhost python3.9[290436]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:16 localhost python3.9[290546]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:16 localhost openstack_network_exporter[250246]: ERROR 09:46:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:46:16 localhost 
openstack_network_exporter[250246]: ERROR 09:46:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:46:16 localhost openstack_network_exporter[250246]: ERROR 09:46:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:46:16 localhost openstack_network_exporter[250246]: ERROR 09:46:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:46:16 localhost openstack_network_exporter[250246]: Oct 5 05:46:16 localhost openstack_network_exporter[250246]: ERROR 09:46:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:46:16 localhost openstack_network_exporter[250246]: Oct 5 05:46:16 localhost python3.9[290656]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:46:17 localhost podman[290674]: 2025-10-05 09:46:17.921059207 +0000 UTC m=+0.086257032 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:46:17 localhost podman[290674]: 2025-10-05 09:46:17.93203964 +0000 UTC m=+0.097237445 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:46:17 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:46:18 localhost podman[290697]: 2025-10-05 09:46:18.9144861 +0000 UTC m=+0.081607173 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:46:18 localhost podman[290697]: 2025-10-05 09:46:18.951123255 +0000 UTC m=+0.118244288 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2) Oct 5 05:46:18 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:46:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:46:20.382 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:46:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:46:20.383 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:46:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:46:20.383 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:46:22 localhost python3.9[290863]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Oct 5 05:46:23 localhost podman[290972]: Oct 5 05:46:23 localhost podman[290972]: 2025-10-05 09:46:23.460344113 +0000 UTC m=+0.078144827 container create ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_taussig, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:46:23 localhost systemd[1]: Started libpod-conmon-ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a.scope. Oct 5 05:46:23 localhost systemd[1]: Started libcrun container. Oct 5 05:46:23 localhost podman[290972]: 2025-10-05 09:46:23.429652972 +0000 UTC m=+0.047453716 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:46:23 localhost podman[290972]: 2025-10-05 09:46:23.540922496 +0000 UTC m=+0.158723220 container init ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_taussig, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., release=553, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.buildah.version=1.33.12) Oct 5 05:46:23 localhost podman[290972]: 2025-10-05 09:46:23.551384765 +0000 UTC m=+0.169185479 container start ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_taussig, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, version=7, name=rhceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64) Oct 5 05:46:23 localhost podman[290972]: 2025-10-05 09:46:23.551832869 +0000 UTC m=+0.169633603 container attach ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_taussig, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, 
io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Oct 5 05:46:23 localhost crazy_taussig[290987]: 167 167 Oct 5 05:46:23 localhost systemd[1]: libpod-ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a.scope: Deactivated successfully. Oct 5 05:46:23 localhost podman[290972]: 2025-10-05 09:46:23.558124753 +0000 UTC m=+0.175925467 container died ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_taussig, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_CLEAN=True, name=rhceph, 
io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:46:23 localhost podman[290992]: 2025-10-05 09:46:23.668008318 +0000 UTC m=+0.095998411 container remove ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_taussig, release=553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True) Oct 5 05:46:23 localhost systemd[1]: libpod-conmon-ecb18e7edc107639c594f7d3e3b91a3df9c2e68a5038edbb493b057048f07a9a.scope: Deactivated successfully. 
Oct 5 05:46:23 localhost sshd[291015]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:46:23 localhost podman[291013]: Oct 5 05:46:23 localhost podman[291013]: 2025-10-05 09:46:23.873469162 +0000 UTC m=+0.077040346 container create 16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_boyd, name=rhceph, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Oct 5 05:46:23 localhost systemd[1]: Started libpod-conmon-16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821.scope. Oct 5 05:46:23 localhost systemd[1]: Started libcrun container. 
Oct 5 05:46:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6653d6b61c269110e48e7c97c16537b2e3a7291a0376a8f122ace9d7e30dd0c9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6653d6b61c269110e48e7c97c16537b2e3a7291a0376a8f122ace9d7e30dd0c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6653d6b61c269110e48e7c97c16537b2e3a7291a0376a8f122ace9d7e30dd0c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6653d6b61c269110e48e7c97c16537b2e3a7291a0376a8f122ace9d7e30dd0c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:46:23 localhost podman[291013]: 2025-10-05 09:46:23.845304622 +0000 UTC m=+0.048875546 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:46:23 localhost podman[291013]: 2025-10-05 09:46:23.947730201 +0000 UTC m=+0.151301135 container init 16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_boyd, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public) Oct 5 05:46:23 localhost systemd[1]: Started Session 64 of User zuul. Oct 5 05:46:23 localhost systemd-logind[760]: New session 64 of user zuul. Oct 5 05:46:23 localhost podman[291013]: 2025-10-05 09:46:23.96391407 +0000 UTC m=+0.167485014 container start 16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_boyd, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , GIT_BRANCH=main) Oct 5 05:46:23 localhost podman[291013]: 2025-10-05 09:46:23.964181407 +0000 UTC m=+0.167752421 container attach 16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_boyd, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, name=rhceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 05:46:24 localhost systemd[1]: session-64.scope: Deactivated successfully. Oct 5 05:46:24 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit. Oct 5 05:46:24 localhost systemd-logind[760]: Removed session 64. Oct 5 05:46:24 localhost systemd[1]: var-lib-containers-storage-overlay-719fc9f91d36e013bb871249063294097d50c7881456df80a54643b16013ae74-merged.mount: Deactivated successfully. Oct 5 05:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:46:24 localhost podman[292491]: 2025-10-05 09:46:24.935082016 +0000 UTC m=+0.092079663 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:46:24 localhost python3.9[292184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:24 
localhost podman[292491]: 2025-10-05 09:46:24.971221077 +0000 UTC m=+0.128218724 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:46:24 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:46:25 localhost podman[292501]: 2025-10-05 09:46:24.974682213 +0000 UTC m=+0.131511826 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 05:46:25 localhost podman[292501]: 2025-10-05 09:46:25.053060895 +0000 UTC m=+0.209890528 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:46:25 localhost sad_boyd[291031]: [ Oct 5 05:46:25 localhost sad_boyd[291031]: { Oct 5 05:46:25 localhost sad_boyd[291031]: "available": false, Oct 5 05:46:25 localhost sad_boyd[291031]: "ceph_device": false, Oct 5 05:46:25 localhost sad_boyd[291031]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 05:46:25 localhost sad_boyd[291031]: "lsm_data": {}, Oct 5 05:46:25 localhost sad_boyd[291031]: "lvs": [], Oct 5 05:46:25 localhost sad_boyd[291031]: "path": "/dev/sr0", Oct 5 05:46:25 localhost sad_boyd[291031]: "rejected_reasons": [ Oct 5 05:46:25 localhost sad_boyd[291031]: "Insufficient space (<5GB)", Oct 5 05:46:25 localhost sad_boyd[291031]: "Has a FileSystem" Oct 5 05:46:25 localhost sad_boyd[291031]: ], Oct 5 05:46:25 localhost sad_boyd[291031]: "sys_api": { Oct 5 05:46:25 localhost sad_boyd[291031]: "actuators": null, Oct 5 05:46:25 localhost sad_boyd[291031]: "device_nodes": "sr0", Oct 5 05:46:25 localhost sad_boyd[291031]: "human_readable_size": "482.00 KB", Oct 5 05:46:25 localhost sad_boyd[291031]: "id_bus": "ata", Oct 5 
05:46:25 localhost sad_boyd[291031]: "model": "QEMU DVD-ROM", Oct 5 05:46:25 localhost sad_boyd[291031]: "nr_requests": "2", Oct 5 05:46:25 localhost sad_boyd[291031]: "partitions": {}, Oct 5 05:46:25 localhost sad_boyd[291031]: "path": "/dev/sr0", Oct 5 05:46:25 localhost sad_boyd[291031]: "removable": "1", Oct 5 05:46:25 localhost sad_boyd[291031]: "rev": "2.5+", Oct 5 05:46:25 localhost sad_boyd[291031]: "ro": "0", Oct 5 05:46:25 localhost sad_boyd[291031]: "rotational": "1", Oct 5 05:46:25 localhost sad_boyd[291031]: "sas_address": "", Oct 5 05:46:25 localhost sad_boyd[291031]: "sas_device_handle": "", Oct 5 05:46:25 localhost sad_boyd[291031]: "scheduler_mode": "mq-deadline", Oct 5 05:46:25 localhost sad_boyd[291031]: "sectors": 0, Oct 5 05:46:25 localhost sad_boyd[291031]: "sectorsize": "2048", Oct 5 05:46:25 localhost sad_boyd[291031]: "size": 493568.0, Oct 5 05:46:25 localhost sad_boyd[291031]: "support_discard": "0", Oct 5 05:46:25 localhost sad_boyd[291031]: "type": "disk", Oct 5 05:46:25 localhost sad_boyd[291031]: "vendor": "QEMU" Oct 5 05:46:25 localhost sad_boyd[291031]: } Oct 5 05:46:25 localhost sad_boyd[291031]: } Oct 5 05:46:25 localhost sad_boyd[291031]: ] Oct 5 05:46:25 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:46:25 localhost systemd[1]: libpod-16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821.scope: Deactivated successfully. 
Oct 5 05:46:25 localhost podman[291013]: 2025-10-05 09:46:25.085062523 +0000 UTC m=+1.288633437 container died 16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_boyd, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7) Oct 5 05:46:25 localhost systemd[1]: libpod-16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821.scope: Consumed 1.078s CPU time. Oct 5 05:46:25 localhost systemd[1]: var-lib-containers-storage-overlay-6653d6b61c269110e48e7c97c16537b2e3a7291a0376a8f122ace9d7e30dd0c9-merged.mount: Deactivated successfully. 
Oct 5 05:46:25 localhost podman[293274]: 2025-10-05 09:46:25.166201232 +0000 UTC m=+0.070105844 container remove 16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_boyd, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True) Oct 5 05:46:25 localhost systemd[1]: libpod-conmon-16ebcc3c6511e6c7a61d470ff067dd8c5b639ce6ec1d3711f07f375834a95821.scope: Deactivated successfully. 
Oct 5 05:46:25 localhost python3.9[293390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657584.4891195-3919-251154819486012/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:26 localhost podman[248157]: time="2025-10-05T09:46:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:46:26 localhost podman[248157]: @ - - [05/Oct/2025:09:46:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:46:26 localhost podman[248157]: @ - - [05/Oct/2025:09:46:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17826 "" "Go-http-client/1.1" Oct 5 05:46:26 localhost python3.9[293499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:26 localhost python3.9[293554]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:27 localhost python3.9[293662]: ansible-ansible.legacy.stat 
Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:28 localhost python3.9[293748]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657587.1089275-3919-243142970520695/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:28 localhost python3.9[293856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:29 localhost python3.9[293942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657588.2595332-3919-170938843302580/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=be143462936c4f6b37574d8a4ad49679def80d15 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:29 localhost python3.9[294050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:30 localhost python3.9[294136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 
setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759657589.5463443-3919-153662452222358/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42892 DF PROTO=TCP SPT=52434 DPT=9102 SEQ=3472783247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77081B50000000001030307) Oct 5 05:46:31 localhost python3.9[294246]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:46:31 localhost python3.9[294356]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:46:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42893 DF PROTO=TCP SPT=52434 DPT=9102 SEQ=3472783247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC77085B60000000001030307) Oct 5 05:46:32 localhost python3.9[294466]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:33 localhost python3.9[294578]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:46:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42894 DF PROTO=TCP SPT=52434 DPT=9102 SEQ=3472783247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7708DB60000000001030307) Oct 5 05:46:34 localhost python3.9[294686]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:35 localhost python3.9[294796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:46:35 localhost python3.9[294851]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:35 localhost systemd[1]: tmp-crun.RPuvi4.mount: Deactivated successfully. Oct 5 05:46:35 localhost podman[294852]: 2025-10-05 09:46:35.994267042 +0000 UTC m=+0.162710570 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:46:36 localhost podman[294852]: 2025-10-05 09:46:36.003488737 +0000 UTC m=+0.171932255 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS) Oct 5 05:46:36 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:46:36 localhost podman[294853]: 2025-10-05 09:46:35.943476165 +0000 UTC m=+0.113172718 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:46:36 localhost podman[294853]: 2025-10-05 09:46:36.0812038 +0000 UTC m=+0.250900333 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, 
maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:46:36 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:46:37 localhost python3.9[295000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True Oct 5 05:46:37 localhost python3.9[295055]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Oct 5 05:46:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:46:37 localhost podman[295073]: 2025-10-05 09:46:37.915417905 +0000 UTC m=+0.079201320 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=) Oct 5 05:46:37 localhost podman[295073]: 2025-10-05 09:46:37.953496553 +0000 UTC m=+0.117279988 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the 
minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container) Oct 5 05:46:37 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:46:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42895 DF PROTO=TCP SPT=52434 DPT=9102 SEQ=3472783247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7709D770000000001030307) Oct 5 05:46:38 localhost python3.9[295186]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.881 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this 
cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:46:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:46:39 localhost python3.9[295296]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Oct 5 05:46:40 localhost python3[295406]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers 
config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Oct 5 05:46:40 localhost python3[295406]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0d460c957a79c0fa941447cb00e5ab934f0ccc1442862d4e417ff427bd26aed9",#012 "Digest": "sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-10-05T06:32:21.432647731Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207527293,#012 "VirtualSize": 1207527293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 "sha256:6a39f36d67f67acbd99daa43f5f54c2ceabda19dd25b824285c9338b74a7494e",#012 "sha256:9a26e1dd0ae990be1ae7a87aaaac389265f77f7100ea3ac633d95d89956449a4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012
Oct 5 05:46:41 localhost python3.9[295578]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:46:42 localhost nova_compute[238014]: 2025-10-05 09:46:42.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:42 localhost nova_compute[238014]: 2025-10-05 09:46:42.378 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 05:46:42 localhost python3.9[295690]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.373 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.376 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.402 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.402 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.402 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.403 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.403 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:46:43 localhost systemd[1]: tmp-crun.Kda2h1.mount: Deactivated successfully.
Oct 5 05:46:43 localhost podman[295800]: 2025-10-05 09:46:43.488565865 +0000 UTC m=+0.097567026 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:46:43 localhost podman[295800]: 2025-10-05 09:46:43.522250865 +0000 UTC 
m=+0.131251926 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001) Oct 5 05:46:43 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:46:43 localhost python3.9[295801]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 5 05:46:43 localhost nova_compute[238014]: 2025-10-05 09:46:43.835 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.038 2 WARNING nova.virt.libvirt.driver [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.040 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12448MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.040 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.041 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.110 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.110 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.135 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:46:44 localhost python3[295970]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.633 2 DEBUG oslo_concurrency.processutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.639 2 DEBUG nova.compute.provider_tree [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.657 2 DEBUG
nova.scheduler.client.report [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.660 2 DEBUG nova.compute.resource_tracker [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:46:44 localhost nova_compute[238014]: 2025-10-05 09:46:44.661 2 DEBUG oslo_concurrency.lockutils [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:46:44 localhost python3[295970]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "0d460c957a79c0fa941447cb00e5ab934f0ccc1442862d4e417ff427bd26aed9",#012 "Digest": "sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:fe858189991614ceec520ae642d69c7272d227c619869aa1246f3864b99002d9"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": 
"2025-10-05T06:32:21.432647731Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1207527293,#012 "VirtualSize": 1207527293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/51990b260222d7db8984d41725e43ec764412732ca6d2e45b5e506bb45ebdc98/diff:/var/lib/containers/storage/overlay/99798cddfa9923cc331acab6c10704bd803be0a6e6ccb2c284a0cb9fb13f6e39/diff:/var/lib/containers/storage/overlay/30b6713bec4042d20977a7e76706b7fba00a8731076cb5a6bb592fbc59ae4cc2/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/d45d3a2e0b4fceb324d00389025b85a79ce81c90161b7badb50571ac56c1fbb7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012 "sha256:0401503ff2c81110ce9d76f6eb97b9692080164bee7fb0b8bb5c17469b18b8d2",#012 "sha256:1fc8d38a33e99522a1f9a7801d867429b8d441d43df8c37b8b3edbd82330b79a",#012 
"sha256:6a39f36d67f67acbd99daa43f5f54c2ceabda19dd25b824285c9338b74a7494e",#012 "sha256:9a26e1dd0ae990be1ae7a87aaaac389265f77f7100ea3ac633d95d89956449a4"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251001",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "88dc57612f447daadb492dcf3ad854ac",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-10-01T03:48:01.636308726Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:01.636415187Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251001\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-01T03:48:09.404099909Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-10-05T06:08:27.442907082Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442948673Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442975414Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.442996675Z",#012 
"created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443019515Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.443038026Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:08:27.812870525Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-10-05T06:09:01.704420807Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Oct 5 05:46:45 localhost nova_compute[238014]: 2025-10-05 09:46:45.661 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:45 localhost nova_compute[238014]: 2025-10-05 09:46:45.662 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Starting heal instance 
info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 05:46:45 localhost nova_compute[238014]: 2025-10-05 09:46:45.662 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 05:46:45 localhost nova_compute[238014]: 2025-10-05 09:46:45.685 2 DEBUG nova.compute.manager [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 5 05:46:45 localhost nova_compute[238014]: 2025-10-05 09:46:45.686 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:45 localhost nova_compute[238014]: 2025-10-05 09:46:45.687 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:46:45 localhost python3.9[296143]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 5 05:46:46 localhost openstack_network_exporter[250246]: ERROR 09:46:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:46:46 localhost openstack_network_exporter[250246]: ERROR 09:46:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:46:46 localhost openstack_network_exporter[250246]: ERROR 09:46:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:46:46 localhost openstack_network_exporter[250246]: ERROR 09:46:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:46:46 localhost openstack_network_exporter[250246]:
Oct 5 05:46:46 localhost openstack_network_exporter[250246]: ERROR 09:46:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:46:46 localhost openstack_network_exporter[250246]:
Oct 5 05:46:47 localhost python3.9[296255]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:46:47 localhost python3.9[296364]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759657607.1421163-4554-65176301642228/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 5 05:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:46:48 localhost podman[296420]: 2025-10-05 09:46:48.306390613 +0000 UTC m=+0.093024954 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:46:48 localhost podman[296420]: 2025-10-05 09:46:48.31482177 +0000 UTC m=+0.101456091 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:46:48 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:46:48 localhost python3.9[296419]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Oct 5 05:46:49 localhost nova_compute[238014]: 2025-10-05 09:46:49.377 2 DEBUG oslo_service.periodic_task [None req-bd860c23-3198-4dfd-b3db-1708db962720 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:46:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:46:49 localhost podman[296460]: 2025-10-05 09:46:49.91817876 +0000 UTC m=+0.081905133 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:46:49 localhost podman[296460]: 2025-10-05 09:46:49.954508291 +0000 UTC m=+0.118234654 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 05:46:49 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:46:50 localhost python3.9[296570]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:51 localhost python3.9[296678]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:52 localhost python3.9[296786]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Oct 5 05:46:53 localhost python3.9[296896]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None 
health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Oct 5 05:46:53 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 106.3 (354 of 333 items), suggesting rotation. Oct 5 05:46:53 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:46:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:46:53 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:46:54 localhost python3.9[297030]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:46:54 localhost systemd[1]: Stopping nova_compute container... Oct 5 05:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:46:55 localhost podman[297047]: 2025-10-05 09:46:55.145948972 +0000 UTC m=+0.084134634 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:46:55 localhost podman[297047]: 2025-10-05 09:46:55.184136563 +0000 UTC m=+0.122322175 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:46:55 localhost podman[297056]: 2025-10-05 09:46:55.184120582 +0000 UTC m=+0.084788521 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:46:55 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:46:55 localhost podman[297056]: 2025-10-05 09:46:55.252066887 +0000 UTC m=+0.152734816 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:46:55 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:46:55 localhost nova_compute[238014]: 2025-10-05 09:46:55.591 2 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Oct 5 05:46:55 localhost nova_compute[238014]: 2025-10-05 09:46:55.593 2 DEBUG oslo_concurrency.lockutils [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 05:46:55 localhost nova_compute[238014]: 2025-10-05 09:46:55.594 2 DEBUG oslo_concurrency.lockutils [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 05:46:55 localhost nova_compute[238014]: 2025-10-05 09:46:55.594 2 DEBUG oslo_concurrency.lockutils [None req-dc1000d4-f6c3-478c-9147-03f1f491b4c1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 05:46:55 localhost journal[237275]: End of file while reading data: Input/output error Oct 5 05:46:55 localhost systemd[1]: libpod-c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5.scope: Deactivated successfully. Oct 5 05:46:55 localhost systemd[1]: libpod-c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5.scope: Consumed 18.897s CPU time. 
Oct 5 05:46:55 localhost podman[297034]: 2025-10-05 09:46:55.951020018 +0000 UTC m=+1.450644336 container died c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:46:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5-userdata-shm.mount: Deactivated successfully. Oct 5 05:46:55 localhost systemd[1]: var-lib-containers-storage-overlay-625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a-merged.mount: Deactivated successfully. 
Oct 5 05:46:56 localhost podman[248157]: time="2025-10-05T09:46:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:46:56 localhost podman[297034]: 2025-10-05 09:46:56.101809521 +0000 UTC m=+1.601433779 container cleanup c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:46:56 localhost podman[297034]: nova_compute Oct 5 05:46:56 localhost podman[297091]: 2025-10-05 09:46:56.104635487 +0000 UTC m=+0.148800110 container cleanup c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 
(image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Oct 5 05:46:56 localhost podman[248157]: @ - - [05/Oct/2025:09:46:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139972 "" "Go-http-client/1.1" Oct 5 05:46:56 localhost podman[248157]: @ - - [05/Oct/2025:09:46:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17696 "" "Go-http-client/1.1" Oct 5 05:46:56 localhost podman[297104]: 2025-10-05 09:46:56.195292636 +0000 UTC m=+0.064148394 container cleanup 
c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=nova_compute) Oct 5 05:46:56 localhost podman[297104]: nova_compute Oct 5 05:46:56 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Oct 5 05:46:56 localhost systemd[1]: Stopped nova_compute container. Oct 5 05:46:56 localhost systemd[1]: Starting nova_compute container... Oct 5 05:46:56 localhost systemd[1]: Started libcrun container. 
Oct 5 05:46:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625d3dab6cde344c4c793816c9c1778588d3d69b142a4832f571ffb84a48ea8a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:56 localhost podman[297115]: 2025-10-05 09:46:56.353642103 +0000 UTC m=+0.125434099 container init c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:46:56 localhost podman[297115]: 2025-10-05 09:46:56.362866882 +0000 UTC m=+0.134658878 container start c9a8b80566caf17988e6e8ec0ab563082c7102ec8ae76166033843dcf59fa4f5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:46:56 localhost podman[297115]: nova_compute
Oct 5 05:46:56 localhost nova_compute[297130]: + sudo -E kolla_set_configs
Oct 5 05:46:56 localhost systemd[1]: Started nova_compute container.
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Validating config file
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying service configuration files
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /etc/ceph
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Creating directory /etc/ceph
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/ceph
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Writing out command to execute
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:46:56 localhost nova_compute[297130]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 5 05:46:56 localhost nova_compute[297130]: ++ cat /run_command
Oct 5 05:46:56 localhost nova_compute[297130]: + CMD=nova-compute
Oct 5 05:46:56 localhost nova_compute[297130]: + ARGS=
Oct 5 05:46:56 localhost nova_compute[297130]: + sudo kolla_copy_cacerts
Oct 5 05:46:56 localhost nova_compute[297130]: + [[ ! -n '' ]]
Oct 5 05:46:56 localhost nova_compute[297130]: + . kolla_extend_start
Oct 5 05:46:56 localhost nova_compute[297130]: + echo 'Running command: '\''nova-compute'\'''
Oct 5 05:46:56 localhost nova_compute[297130]: Running command: 'nova-compute'
Oct 5 05:46:56 localhost nova_compute[297130]: + umask 0022
Oct 5 05:46:56 localhost nova_compute[297130]: + exec nova-compute
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.119 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.120 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.120 2 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.120 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.241 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.264 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:46:58 localhost python3.9[297254]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 5 05:46:58 localhost systemd[1]: Started libpod-conmon-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7.scope.
Oct 5 05:46:58 localhost systemd[1]: Started libcrun container.
Oct 5 05:46:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 5 05:46:58 localhost podman[297281]: 2025-10-05 09:46:58.723809786 +0000 UTC m=+0.112929392 container init 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 5 05:46:58 localhost podman[297281]: 2025-10-05 09:46:58.733637221 +0000 UTC m=+0.122756847 container start 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 5 05:46:58 localhost python3.9[297254]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Applying nova statedir ownership
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7bff446e28da7b7609613334d4f266c2377bdec4e8e9a595eeb621178e5df9fb
Oct 5 05:46:58 localhost nova_compute_init[297300]: INFO:nova_statedir:Nova statedir ownership complete
Oct 5 05:46:58 localhost systemd[1]: libpod-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7.scope: Deactivated successfully.
Oct 5 05:46:58 localhost podman[297301]: 2025-10-05 09:46:58.807938758 +0000 UTC m=+0.053749863 container died 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:46:58 localhost podman[297315]: 2025-10-05 09:46:58.943262543 +0000 UTC m=+0.127161855 container cleanup 472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible)
Oct 5 05:46:58 localhost systemd[1]: libpod-conmon-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7.scope: Deactivated successfully.
Oct 5 05:46:58 localhost nova_compute[297130]: 2025-10-05 09:46:58.950 2 INFO nova.virt.driver [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.058 2 INFO nova.compute.provider_config [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.074 2 DEBUG oslo_concurrency.lockutils [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.075 2 DEBUG oslo_concurrency.lockutils [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.075 2 DEBUG oslo_concurrency.lockutils [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.076 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.076 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.076 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.077 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.077 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.077 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.078 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.078 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.078 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.079 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.079 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.079 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.079 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.080 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.080 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.080 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.081 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.081 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.081 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.082 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] console_host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.082 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.082 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.082 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.083 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.083 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.083 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.084 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.084 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.084 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.085 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.085 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.085 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.086 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.086 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.086 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.086 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.087 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.087 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.087 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] host = np0005471152.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.088 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.088 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.088 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.089 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.089 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.089 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05
09:46:59.090 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.090 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.090 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.091 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.091 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.091 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.092 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.092 2 DEBUG oslo_service.service 
[None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.092 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.093 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.093 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.093 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.094 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.094 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.094 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m 
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.094 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.095 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.095 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.095 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.095 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.096 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.096 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.096 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.097 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.097 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.097 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.097 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.098 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.098 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.098 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.099 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.099 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.099 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.100 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.100 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.100 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] my_block_storage_ip = 192.168.122.108 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.100 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.101 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.101 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.101 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.102 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.102 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.102 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] osapi_compute_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.102 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.103 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.103 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.103 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.104 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.104 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.104 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 
09:46:59.104 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.105 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.105 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.105 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.106 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.106 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.106 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.107 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reimage_timeout_per_gb = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.107 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.107 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.107 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.108 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.108 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.108 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.109 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.109 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.109 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.110 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.110 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.110 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.110 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.111 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.111 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.111 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.112 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.112 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.112 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.112 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.113 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.113 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] source_is_ipv6 = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.113 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.114 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.114 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.114 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.115 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.115 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.115 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.115 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.116 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.116 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.116 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.117 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.117 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.117 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.118 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_stderr = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.118 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.118 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.118 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.119 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.119 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.119 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.120 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.120 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.120 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.121 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.121 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.121 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.121 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.122 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.122 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.122 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.123 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.123 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.123 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.123 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.123 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.124 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.124 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.124 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.124 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.124 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.124 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.125 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.125 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.125 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.125 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.125 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.126 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.126 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.126 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.126 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.126 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.126 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.127 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.127 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.127 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.127 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.127 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.debug_cache_backend = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.128 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.128 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.128 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.128 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.128 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.129 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.129 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.129 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.129 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.129 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.129 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.130 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.130 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.130 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.130 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.130 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.131 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.131 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.131 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.131 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.131 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.131 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.132 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.132 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.132 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.132 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.132 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.132 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.133 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.133 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.133 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.133 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.133 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.134 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.134 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.134 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.134 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.134 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.134 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.135 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.135 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.135 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.135 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.135 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.cpu_dedicated_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.136 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.136 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.136 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.136 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.136 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.136 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.137 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.137 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.137 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.137 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.137 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.138 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.138 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.138 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] console.ssl_minimum_version = default log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.138 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.138 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.139 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.139 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.139 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.139 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.139 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.140 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.140 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.140 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.140 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.140 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.141 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.141 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.141 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.split_loggers 
= False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.141 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.141 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.142 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.142 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.142 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.142 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.143 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.143 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.143 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.143 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.143 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.144 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.144 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.144 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.144 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.144 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.145 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.145 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.145 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.145 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.145 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.146 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.146 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.146 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.146 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.146 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.147 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.147 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.147 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.connection_parameters = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.147 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.147 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.148 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.148 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.148 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.148 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.148 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.max_overflow = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.149 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.149 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.149 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.149 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.150 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.150 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.150 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.150 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.150 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.150 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.151 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.151 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.151 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.151 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.151 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.152 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.152 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.152 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.152 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.152 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.152 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.153 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.153 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.153 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.153 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.153 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.154 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.154 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.154 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.154 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.154 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.154 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.155 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.155 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.155 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.155 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.155 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - 
- - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.156 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.156 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.156 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.156 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.156 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.156 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.157 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] guestfs.debug = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.157 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.157 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.157 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.157 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.158 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.158 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.158 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.158 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.158 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.use_multipath_io = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.159 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.160 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.160 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.160 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.160 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.160 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.160 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.161 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.162 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.region_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.163 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.164 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.165 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.number_of_retries = 
60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.166 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.167 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.168 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.keyfile = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.169 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 
2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.170 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - 
-] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.171 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.172 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.173 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.174 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.hw_disk_discard = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.175 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_type = rbd log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.176 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.177 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.178 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.178 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.178 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.178 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.178 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.178 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.178 2 WARNING oslo_config.cfg [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Oct 5 05:46:59 localhost nova_compute[297130]: live_migration_uri is deprecated for removal in favor of two other options that Oct 5 05:46:59 localhost nova_compute[297130]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Oct 5 05:46:59 localhost nova_compute[297130]: and ``live_migration_inbound_addr`` respectively. Oct 5 05:46:59 localhost nova_compute[297130]: ). 
Its value may be silently ignored in the future.#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.179 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.180 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rbd_secret_uuid = 659062ac-50b4-5607-b699-3105da7f55ee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.181 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.182 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.183 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.volume_clear_size = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.184 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.185 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.186 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.187 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.service_metadata_proxy = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.188 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.189 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.190 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.190 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.190 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.190 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.190 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.191 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 
09:46:59.191 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.191 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.191 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.191 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.192 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.192 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.192 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.192 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - 
- - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.192 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.192 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.193 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.193 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.193 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.193 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.193 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.194 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.194 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.194 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.194 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.194 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.194 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.195 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost 
nova_compute[297130]: 2025-10-05 09:46:59.195 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.195 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.195 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.195 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.196 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.196 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.196 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.196 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.196 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.196 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.197 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.197 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.197 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.197 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.197 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.username = nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.197 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.198 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.198 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.198 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.198 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.198 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.199 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.199 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.199 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.199 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.199 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.199 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.200 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.200 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.200 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - 
- - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.200 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.200 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.201 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.201 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.201 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.201 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.201 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.202 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.202 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.202 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.202 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.202 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.202 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.203 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.203 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.203 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.203 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.203 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.204 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.204 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.204 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.204 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.204 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.205 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.205 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.205 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.205 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.205 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.205 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.206 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.206 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.206 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.206 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.206 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.207 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.207 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.207 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.207 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.207 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.207 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.208 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.208 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.208 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.208 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.208 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.209 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.209 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.209 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.209 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.209 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.209 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.210 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.210 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.210 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.210 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.210 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.211 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.211 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.211 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.211 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.211 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.212 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.212 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.212 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.212 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.212 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.212 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.213 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.213 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.213 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.213 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.213 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.214 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.214 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.214 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.214 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.214 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.214 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.215 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.215 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.215 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.215 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.215 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.216 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.216 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.216 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.216 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.216 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.216 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.217 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.217 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.217 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.217 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.217 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.218 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.218 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.218 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.218 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.218 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.219 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.219 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.219 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.219 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.219 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.219 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.220 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.220 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.220 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.220 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.220 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.221 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.221 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.221 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.221 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.221 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.222 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.222 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.222 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.222 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.222 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.223 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.223 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.223 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.223 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.223 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.223 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.224 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.224 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.224 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.224 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.224 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.225 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.225 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.225 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.225 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.225 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.225 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.226 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.226 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.226 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.226 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.226 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.227 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.227 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.227 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.227 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.227 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.227 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.228 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.228 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.228 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.228 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.228 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.229 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.229 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -]
oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.229 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.229 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.229 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.230 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.230 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.230 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.230 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.230 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.230 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.231 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.231 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.231 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.231 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.231 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.232 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.232 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.232 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.232 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.232 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.232 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.233 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.233 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.233 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.233 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.233 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.234 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.234 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 
09:46:59.234 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.234 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.234 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.234 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.235 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.235 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.235 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 
05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.235 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.235 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.236 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.236 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.236 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.236 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.236 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.236 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.237 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.237 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.237 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.237 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.237 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.238 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG 
oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.239 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] 
oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.240 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.project_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.241 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 
localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.242 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None 
req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.243 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.244 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.245 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.246 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.247 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.248 2 DEBUG oslo_service.service [None req-2e264f3d-9f4e-4847-830e-f92f7dfb8f8e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.249 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.280 2 INFO nova.virt.node [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Determined node identity 36221146-244b-49ab-8700-5471fa19d0c5 from /var/lib/nova/compute_id
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.280 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.281 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.281 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.281 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.291 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.293 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.294 2 INFO nova.virt.libvirt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Connection event '1' reason 'None'
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.300 2 INFO nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Libvirt host capabilities
[The libvirt capabilities XML that follows in the original message lost its markup in this capture, leaving only element values interleaved with repeated syslog prefixes. Recoverable values: host UUID 26eb4766-c662-4233-bdfd-7faae464b2de; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 (KiB) with page counts 4029153/0/0; secmodels selinux (doi 0; labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0; +107:+107); hvm guests at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc), pc-q35-rhel9.6.0 (canonical q35), and pc-q35-rhel7.6.0 through pc-q35-rhel9.4.0.]
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.307 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.311 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
[The domain-capabilities XML that follows lost its markup as above. Recoverable values: emulator path /usr/libexec/qemu-kvm; domain type kvm; machine pc-i440fx-rhel7.6.0; arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash; readonly yes/no; secure no); host CPU model EPYC-Rome, vendor AMD; and a long list of selectable CPU models including 486 (v1), Broadwell (IBRS, noTSX, noTSX-IBRS, v1-v4), Cascadelake-Server (noTSX, v1-v5), Conroe (v1), Cooperlake (v1-v2), Denverton (v1-v3), Dhyana (v1-v2), EPYC, EPYC-Genoa (v1), EPYC-IBPB, and EPYC-Milan (v1). The model list continues past this excerpt.]
Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Milan-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v4 Oct 5 05:46:59 localhost 
nova_compute[297130]: EPYC-v1 Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v2 Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 
5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-noTSX-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 
5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v5 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v6 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v7 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: KnightsMill Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: KnightsMill-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G1-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G2 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G2-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G3 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G3-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G4-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G5 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G5-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Penryn
Oct 5 05:46:59 localhost nova_compute[297130]: Penryn-v1
Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge
Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-v1
Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-v2
Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids
Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v1
Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v2
Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v3
Oct 5 05:46:59 localhost nova_compute[297130]: SierraForest
Oct 5 05:46:59 localhost nova_compute[297130]: SierraForest-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-noTSX-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v2
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v3
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v4
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-noTSX-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-v2
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-v3
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-v4
Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Server-v5
Oct 5 05:46:59 localhost nova_compute[297130]: Snowridge
Oct 5 05:46:59 localhost nova_compute[297130]: Snowridge-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Snowridge-v2
Oct 5 05:46:59 localhost nova_compute[297130]: Snowridge-v3
Oct 5 05:46:59 localhost nova_compute[297130]: Snowridge-v4
Oct 5 05:46:59 localhost nova_compute[297130]: Westmere
Oct 5 05:46:59 localhost nova_compute[297130]: Westmere-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Westmere-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Westmere-v2
Oct 5 05:46:59 localhost nova_compute[297130]: athlon
Oct 5 05:46:59 localhost nova_compute[297130]: athlon-v1
Oct 5 05:46:59 localhost nova_compute[297130]: core2duo
Oct 5 05:46:59 localhost nova_compute[297130]: core2duo-v1
Oct 5 05:46:59 localhost nova_compute[297130]: coreduo
Oct 5 05:46:59 localhost nova_compute[297130]: coreduo-v1
Oct 5 05:46:59 localhost nova_compute[297130]: kvm32
Oct 5 05:46:59 localhost nova_compute[297130]: kvm32-v1
Oct 5 05:46:59 localhost nova_compute[297130]: kvm64
Oct 5 05:46:59 localhost nova_compute[297130]: kvm64-v1
Oct 5 05:46:59 localhost nova_compute[297130]: n270
Oct 5 05:46:59 localhost nova_compute[297130]: n270-v1
Oct 5 05:46:59 localhost nova_compute[297130]: pentium
Oct 5 05:46:59 localhost nova_compute[297130]: pentium-v1
Oct 5 05:46:59 localhost nova_compute[297130]: pentium2
Oct 5 05:46:59 localhost nova_compute[297130]: pentium2-v1
Oct 5 05:46:59 localhost nova_compute[297130]: pentium3
Oct 5 05:46:59 localhost nova_compute[297130]: pentium3-v1
Oct 5 05:46:59 localhost nova_compute[297130]: phenom
Oct 5 05:46:59 localhost nova_compute[297130]: phenom-v1
Oct 5 05:46:59 localhost nova_compute[297130]: qemu32
Oct 5 05:46:59 localhost nova_compute[297130]: qemu32-v1
Oct 5 05:46:59 localhost nova_compute[297130]: qemu64
Oct 5 05:46:59 localhost nova_compute[297130]: qemu64-v1
Oct 5 05:46:59 localhost nova_compute[297130]: file
Oct 5 05:46:59 localhost nova_compute[297130]: anonymous
Oct 5 05:46:59 localhost nova_compute[297130]: memfd
Oct 5 05:46:59 localhost nova_compute[297130]: disk
Oct 5 05:46:59 localhost nova_compute[297130]: cdrom
Oct 5 05:46:59 localhost nova_compute[297130]: floppy
Oct 5 05:46:59 localhost nova_compute[297130]: lun
Oct 5 05:46:59 localhost nova_compute[297130]: ide
Oct 5 05:46:59 localhost nova_compute[297130]: fdc
Oct 5 05:46:59 localhost nova_compute[297130]: scsi
Oct 5 05:46:59 localhost nova_compute[297130]: virtio
Oct 5 05:46:59 localhost nova_compute[297130]: usb
Oct 5 05:46:59 localhost nova_compute[297130]: sata
Oct 5 05:46:59 localhost nova_compute[297130]: virtio
Oct 5 05:46:59 localhost nova_compute[297130]: virtio-transitional
Oct 5 05:46:59 localhost nova_compute[297130]: virtio-non-transitional
Oct 5 05:46:59 localhost nova_compute[297130]: vnc
Oct 5 05:46:59 localhost nova_compute[297130]: egl-headless
Oct 5 05:46:59 localhost nova_compute[297130]: dbus
Oct 5 05:46:59 localhost nova_compute[297130]: subsystem
Oct 5 05:46:59 localhost nova_compute[297130]: default
Oct 5 05:46:59 localhost nova_compute[297130]: mandatory
Oct 5 05:46:59 localhost nova_compute[297130]: requisite
Oct 5 05:46:59 localhost nova_compute[297130]: optional
Oct 5 05:46:59 localhost nova_compute[297130]: usb
Oct 5 05:46:59 localhost nova_compute[297130]: pci
Oct 5 05:46:59 localhost nova_compute[297130]: scsi
Oct 5 05:46:59 localhost nova_compute[297130]: virtio
Oct 5 05:46:59 localhost nova_compute[297130]: virtio-transitional
Oct 5 05:46:59 localhost nova_compute[297130]: virtio-non-transitional
Oct 5 05:46:59 localhost nova_compute[297130]: random
Oct 5 05:46:59 localhost nova_compute[297130]: egd
Oct 5 05:46:59 localhost nova_compute[297130]: builtin
Oct 5 05:46:59 localhost nova_compute[297130]: path
Oct 5 05:46:59 localhost nova_compute[297130]: handle
Oct 5 05:46:59 localhost nova_compute[297130]: virtiofs
Oct 5 05:46:59 localhost nova_compute[297130]: tpm-tis
Oct 5 05:46:59 localhost nova_compute[297130]: tpm-crb
Oct 5 05:46:59 localhost nova_compute[297130]: emulator
Oct 5 05:46:59 localhost nova_compute[297130]: external
Oct 5 05:46:59 localhost nova_compute[297130]: 2.0
Oct 5 05:46:59 localhost nova_compute[297130]: usb
Oct 5 05:46:59 localhost nova_compute[297130]: pty
Oct 5 05:46:59 localhost nova_compute[297130]: unix
Oct 5 05:46:59 localhost nova_compute[297130]: qemu
Oct 5 05:46:59 localhost nova_compute[297130]: builtin
Oct 5 05:46:59 localhost nova_compute[297130]: default
Oct 5 05:46:59 localhost nova_compute[297130]: passt
Oct 5 05:46:59 localhost nova_compute[297130]: isa
Oct 5 05:46:59 localhost nova_compute[297130]: hyperv
Oct 5 05:46:59 localhost nova_compute[297130]: relaxed
Oct 5 05:46:59 localhost nova_compute[297130]: vapic
Oct 5 05:46:59 localhost nova_compute[297130]: spinlocks
Oct 5 05:46:59 localhost nova_compute[297130]: vpindex
Oct 5 05:46:59 localhost nova_compute[297130]: runtime
Oct 5 05:46:59 localhost nova_compute[297130]: synic
Oct 5 05:46:59 localhost nova_compute[297130]: stimer
Oct 5 05:46:59 localhost nova_compute[297130]: reset
Oct 5 05:46:59 localhost nova_compute[297130]: vendor_id
Oct 5 05:46:59 localhost nova_compute[297130]: frequencies
Oct 5 05:46:59 localhost nova_compute[297130]: reenlightenment
Oct 5 05:46:59 localhost nova_compute[297130]: tlbflush
Oct 5 05:46:59 localhost nova_compute[297130]: ipi
Oct 5 05:46:59 localhost nova_compute[297130]: avic
Oct 5 05:46:59 localhost nova_compute[297130]: emsr_bitmap
Oct 5 05:46:59 localhost nova_compute[297130]: xmm_input
Oct 5 05:46:59 localhost nova_compute[297130]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.314 2 DEBUG nova.virt.libvirt.volume.mount [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.319 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 5 05:46:59 localhost nova_compute[297130]: /usr/libexec/qemu-kvm
Oct 5 05:46:59 localhost nova_compute[297130]: kvm
Oct 5 05:46:59 localhost nova_compute[297130]: pc-q35-rhel9.6.0
Oct 5 05:46:59 localhost nova_compute[297130]: i686
Oct 5 05:46:59 localhost nova_compute[297130]: /usr/share/OVMF/OVMF_CODE.secboot.fd
Oct 5 05:46:59 localhost nova_compute[297130]: rom
Oct 5 05:46:59 localhost nova_compute[297130]: pflash
Oct 5 05:46:59 localhost nova_compute[297130]: yes
Oct 5 05:46:59 localhost nova_compute[297130]: no
Oct 5 05:46:59 localhost nova_compute[297130]: no
Oct 5 05:46:59 localhost nova_compute[297130]: on
Oct 5 05:46:59 localhost nova_compute[297130]: off
Oct 5 05:46:59 localhost nova_compute[297130]: on
Oct 5 05:46:59 localhost nova_compute[297130]: off
Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome
Oct 5 05:46:59 localhost nova_compute[297130]: AMD
Oct 5 05:46:59 localhost nova_compute[297130]: 486
Oct 5 05:46:59 localhost nova_compute[297130]: 486-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-noTSX
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-noTSX-IBRS
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-v1
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-v2
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-v3
Oct 5 05:46:59 localhost nova_compute[297130]: Broadwell-v4
Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cascadelake-Server-v5 Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Conroe Oct 5 05:46:59 localhost nova_compute[297130]: Conroe-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Cooperlake Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cooperlake-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Cooperlake-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Denverton Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Denverton-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Denverton-v2 Oct 5 05:46:59 localhost nova_compute[297130]: 
Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Denverton-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Dhyana Oct 5 05:46:59 localhost nova_compute[297130]: Dhyana-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Dhyana-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Genoa Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Genoa-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-IBPB Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Milan Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Milan-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Milan-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: 
Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-Rome-v4 Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v1 Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v2 Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: EPYC-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-noTSX-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
Oct 5 05:46:59 localhost nova_compute[297130]: [multi-line libvirt domain-capabilities XML, logged one element per syslog record; the XML tags were lost on extraction, so the repeated "Oct 5 05:46:59 localhost nova_compute[297130]:" prefixes are collapsed below and only the surviving element values are kept, in their original order. Group labels are inferred from libvirt's domainCapabilities schema.]

CPU models (listing enters mid-stream): Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Memory backing source types: file, anonymous, memfd
Disk device types: disk, cdrom, floppy, lun
Disk buses: fdc, scsi, virtio, usb, sata
Disk models: virtio, virtio-transitional, virtio-non-transitional
Graphics types: vnc, egl-headless, dbus
Hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
RNG models: virtio, virtio-transitional, virtio-non-transitional; RNG backends: random, egd, builtin
Filesystem driver types: path, handle, virtiofs
TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; backend version: 2.0
Redirected-device bus: usb
Serial/channel types: pty, unix
Crypto model: qemu; crypto backend: builtin
Interface backends: default, passt
Panic models: isa, hyperv
Hyper-V enlightenments (listing truncated mid-stream): relaxed, vapic, spinlocks, vpindex, runtime, synic
nova_compute[297130]: stimer Oct 5 05:46:59 localhost nova_compute[297130]: reset Oct 5 05:46:59 localhost nova_compute[297130]: vendor_id Oct 5 05:46:59 localhost nova_compute[297130]: frequencies Oct 5 05:46:59 localhost nova_compute[297130]: reenlightenment Oct 5 05:46:59 localhost nova_compute[297130]: tlbflush Oct 5 05:46:59 localhost nova_compute[297130]: ipi Oct 5 05:46:59 localhost nova_compute[297130]: avic Oct 5 05:46:59 localhost nova_compute[297130]: emsr_bitmap Oct 5 05:46:59 localhost nova_compute[297130]: xmm_input Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.353 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.358 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: /usr/libexec/qemu-kvm Oct 5 05:46:59 localhost nova_compute[297130]: kvm Oct 5 05:46:59 localhost nova_compute[297130]: pc-i440fx-rhel7.6.0 Oct 5 05:46:59 localhost nova_compute[297130]: x86_64 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: 
Oct 5 05:46:59 localhost nova_compute[297130]: firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; type: rom pflash; readonly: yes no; secure: no; further on/off enums
Oct 5 05:46:59 localhost nova_compute[297130]: host CPU model: EPYC-Rome; vendor: AMD
Oct 5 05:46:59 localhost nova_compute[297130]: supported CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Icelake-Server-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v5 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 
5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v6 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v7 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: KnightsMill Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: KnightsMill-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G1-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G2 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G2-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G3 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G3-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G4-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G5 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G5-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Penryn Oct 5 05:46:59 localhost nova_compute[297130]: Penryn-v1 Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-v1 Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-v2 Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: SierraForest Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: SierraForest-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-noTSX-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Skylake-Client-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
Oct 5 05:46:59 localhost nova_compute[297130]: [multi-line libvirt domain capabilities XML dump; markup stripped by the log collector, recoverable values only: guest CPU models Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1..v5, Snowridge, Snowridge-v1..v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; virtio models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev: subsystem (default, mandatory, requisite, optional; usb, pci, scsi); rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin; filesystem: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; TPM version: 2.0; redirdev bus: usb; serial: pty, unix; iommu/backend: qemu, builtin; net backends: default, passt; panic models: isa, hyperv; Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.413 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: [XML dump, markup stripped, recoverable values only: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.6.0, arch x86_64, firmware efi; firmware images /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types rom, pflash; readonly yes/no, secure yes/no, enrolled-keys on/off, secure-boot on/off; host CPU model EPYC-Rome, vendor AMD; guest CPU models 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1..v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1..v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1..v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1..v4, EPYC-v1..v4, GraniteRapids, ...] (dump continues)
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: GraniteRapids-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-noTSX-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v2 Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Haswell-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 
5 05:46:59 localhost nova_compute[297130]: Icelake-Server-noTSX Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v3 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 
localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v5 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 
5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v6 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Icelake-Server-v7 Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: IvyBridge-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: 
Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: KnightsMill Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: KnightsMill-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Nehalem-v2 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G1-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G2 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G2-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G3 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G3-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G4 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 
05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G4-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G5 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Opteron_G5-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Penryn Oct 5 05:46:59 localhost nova_compute[297130]: Penryn-v1 Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-IBRS Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-v1 Oct 5 05:46:59 localhost nova_compute[297130]: SandyBridge-v2 Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: SapphireRapids-v1 Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost nova_compute[297130]: Oct 5 05:46:59 localhost 
Oct 5 05:46:59 localhost nova_compute[297130]: [libvirt domainCapabilities XML was emitted here; the log capture stripped the markup and wrapped a syslog prefix around every element, so only the element values are recoverable. Grouping below is inferred from the libvirt domainCapabilities schema.]
Oct 5 05:46:59 localhost nova_compute[297130]: CPU models (list continues from earlier records): SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Oct 5 05:46:59 localhost nova_compute[297130]: memory backing source types: file, anonymous, memfd
Oct 5 05:46:59 localhost nova_compute[297130]: disk device types: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Oct 5 05:46:59 localhost nova_compute[297130]: graphics types: vnc, egl-headless, dbus
Oct 5 05:46:59 localhost nova_compute[297130]: hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Oct 5 05:46:59 localhost nova_compute[297130]: rng models: virtio, virtio-transitional, virtio-non-transitional; backend models: random, egd, builtin
Oct 5 05:46:59 localhost nova_compute[297130]: filesystem driver types: path, handle, virtiofs
Oct 5 05:46:59 localhost nova_compute[297130]: tpm models: tpm-tis, tpm-crb; backend models: emulator, external; backend version: 2.0
Oct 5 05:46:59 localhost nova_compute[297130]: redirdev bus: usb; character device types: pty, unix
Oct 5 05:46:59 localhost nova_compute[297130]: further backend enums (device attribution unclear in the flattened output): qemu, builtin; default, passt
Oct 5 05:46:59 localhost nova_compute[297130]: panic models: isa, hyperv
Oct 5 05:46:59 localhost nova_compute[297130]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Oct 5 05:46:59 localhost nova_compute[297130]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.462 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.462 2 INFO nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Secure Boot support detected#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.464 2 INFO nova.virt.libvirt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.465 2 INFO nova.virt.libvirt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.474 2 DEBUG nova.virt.libvirt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.528 2 INFO nova.virt.node [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Determined node identity 36221146-244b-49ab-8700-5471fa19d0c5 from /var/lib/nova/compute_id#033[00m
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.611 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Verified node 36221146-244b-49ab-8700-5471fa19d0c5 matches my host np0005471152.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Oct 5 05:46:59 localhost systemd[1]:
var-lib-containers-storage-overlay-dd4335f3e4ff83c4867d5fedd8c555a32f879458e8700fed3aabdf74a30a71d3-merged.mount: Deactivated successfully.
Oct 5 05:46:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-472b23fa7234746a25c99f8ea1e583e0bf7a9cdb88383f16cd86fd6e349cc6b7-userdata-shm.mount: Deactivated successfully.
Oct 5 05:46:59 localhost nova_compute[297130]: 2025-10-05 09:46:59.784 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.180 2 DEBUG oslo_concurrency.lockutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.180 2 DEBUG oslo_concurrency.lockutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.180 2 DEBUG oslo_concurrency.lockutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.180 2 DEBUG nova.compute.resource_tracker [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.181 2 DEBUG oslo_concurrency.processutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:47:00 localhost systemd[1]: session-62.scope: Deactivated successfully.
Oct 5 05:47:00 localhost systemd[1]: session-62.scope: Consumed 1min 51.098s CPU time.
Oct 5 05:47:00 localhost systemd-logind[760]: Session 62 logged out. Waiting for processes to exit.
Oct 5 05:47:00 localhost systemd-logind[760]: Removed session 62.
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.673 2 DEBUG oslo_concurrency.processutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.865 2 WARNING nova.virt.libvirt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.867 2 DEBUG nova.compute.resource_tracker [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12466MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.867 2 DEBUG oslo_concurrency.lockutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.867 2 DEBUG oslo_concurrency.lockutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:47:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4018 DF PROTO=TCP SPT=33518 DPT=9102 SEQ=4203593253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC770F6E60000000001030307)
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.985 2 DEBUG nova.compute.resource_tracker [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 5 05:47:00 localhost nova_compute[297130]: 2025-10-05 09:47:00.985 2 DEBUG nova.compute.resource_tracker [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Final resource view: name=np0005471152.localdomain
phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.166 2 DEBUG nova.scheduler.client.report [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.353 2 DEBUG nova.scheduler.client.report [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.353 2 DEBUG nova.compute.provider_tree [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.370 2 DEBUG nova.scheduler.client.report [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.401 2 DEBUG nova.scheduler.client.report [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMA
GE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.418 2 DEBUG oslo_concurrency.processutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.873 2 DEBUG oslo_concurrency.processutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.880 2 DEBUG nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Oct 5 05:47:01 localhost nova_compute[297130]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.880 2 INFO nova.virt.libvirt.host [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] kernel doesn't support AMD SEV#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.881 2 DEBUG nova.compute.provider_tree [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.882 2 DEBUG nova.virt.libvirt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - 
-] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.946 2 DEBUG nova.scheduler.client.report [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:47:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4019 DF PROTO=TCP SPT=33518 DPT=9102 SEQ=4203593253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC770FAF70000000001030307) Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.985 2 DEBUG nova.compute.resource_tracker [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:47:01 localhost nova_compute[297130]: 2025-10-05 09:47:01.986 2 DEBUG oslo_concurrency.lockutils [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:01 localhost 
nova_compute[297130]: 2025-10-05 09:47:01.986 2 DEBUG nova.service [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Oct 5 05:47:02 localhost nova_compute[297130]: 2025-10-05 09:47:02.032 2 DEBUG nova.service [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Oct 5 05:47:02 localhost nova_compute[297130]: 2025-10-05 09:47:02.033 2 DEBUG nova.servicegroup.drivers.db [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] DB_Driver: join new ServiceGroup member np0005471152.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Oct 5 05:47:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4020 DF PROTO=TCP SPT=33518 DPT=9102 SEQ=4203593253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77102F60000000001030307) Oct 5 05:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:47:06 localhost podman[297426]: 2025-10-05 09:47:06.923505154 +0000 UTC m=+0.084517844 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:47:06 localhost podman[297426]: 2025-10-05 09:47:06.934406298 +0000 UTC m=+0.095418958 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:47:06 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:47:07 localhost podman[297425]: 2025-10-05 09:47:07.031921271 +0000 UTC m=+0.192459529 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:47:07 localhost podman[297425]: 2025-10-05 09:47:07.046447295 +0000 UTC m=+0.206985603 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true) Oct 5 05:47:07 localhost 
systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:47:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4021 DF PROTO=TCP SPT=33518 DPT=9102 SEQ=4203593253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77112B70000000001030307) Oct 5 05:47:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:47:08 localhost podman[297467]: 2025-10-05 09:47:08.913157498 +0000 UTC m=+0.080852675 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350) Oct 5 05:47:08 localhost podman[297467]: 2025-10-05 09:47:08.955133312 +0000 UTC m=+0.122828449 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter) Oct 5 05:47:08 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:47:13 localhost podman[297487]: 2025-10-05 09:47:13.918943793 +0000 UTC m=+0.082328166 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent) Oct 5 05:47:13 localhost podman[297487]: 2025-10-05 09:47:13.92810466 +0000 UTC 
m=+0.091489073 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Oct 5 05:47:13 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:47:16 localhost openstack_network_exporter[250246]: ERROR 09:47:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:47:16 localhost openstack_network_exporter[250246]: ERROR 09:47:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:16 localhost openstack_network_exporter[250246]: ERROR 09:47:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:16 localhost openstack_network_exporter[250246]: ERROR 09:47:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:47:16 localhost openstack_network_exporter[250246]: Oct 5 05:47:16 localhost openstack_network_exporter[250246]: ERROR 09:47:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:47:16 localhost openstack_network_exporter[250246]: Oct 5 05:47:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:47:18 localhost podman[297507]: 2025-10-05 09:47:18.910967484 +0000 UTC m=+0.083501776 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:47:18 localhost podman[297507]: 2025-10-05 09:47:18.948070576 +0000 UTC m=+0.120604828 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:47:18 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:47:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:47:20.384 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:47:20.384 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:47:20.384 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:47:20 localhost podman[297531]: 2025-10-05 09:47:20.907940507 +0000 UTC m=+0.077006182 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 05:47:20 localhost podman[297531]: 2025-10-05 09:47:20.922366876 +0000 UTC m=+0.091432571 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:47:20 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:47:25 localhost podman[297567]: 2025-10-05 09:47:25.934295896 +0000 UTC m=+0.095754267 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:47:25 localhost podman[297564]: 2025-10-05 09:47:25.910849333 +0000 UTC m=+0.079563991 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:47:25 localhost podman[297567]: 2025-10-05 09:47:25.966427233 +0000 UTC m=+0.127885664 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller) Oct 5 05:47:25 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:47:25 localhost podman[297564]: 2025-10-05 09:47:25.995211042 +0000 UTC m=+0.163925620 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:47:26 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:47:26 localhost podman[248157]: time="2025-10-05T09:47:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:47:26 localhost podman[248157]: @ - - [05/Oct/2025:09:47:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:47:26 localhost podman[248157]: @ - - [05/Oct/2025:09:47:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17825 "" "Go-http-client/1.1" Oct 5 05:47:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27446 DF PROTO=TCP SPT=58432 DPT=9102 SEQ=1461155490 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7716C160000000001030307) Oct 5 05:47:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27447 DF PROTO=TCP SPT=58432 DPT=9102 SEQ=1461155490 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77170360000000001030307) Oct 5 05:47:33 localhost nova_compute[297130]: 2025-10-05 09:47:33.035 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:33 localhost nova_compute[297130]: 2025-10-05 09:47:33.054 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163196]: 2025-10-05 09:47:33.884 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:47:33 localhost ovn_metadata_agent[163196]: 2025-10-05 09:47:33.886 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 05:47:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27448 DF PROTO=TCP SPT=58432 DPT=9102 SEQ=1461155490 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AC77178360000000001030307) Oct 5 05:47:34 localhost ovn_metadata_agent[163196]: 2025-10-05 09:47:34.888 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:47:37 localhost podman[297680]: 2025-10-05 09:47:37.918488398 +0000 UTC m=+0.088320137 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 05:47:37 localhost podman[297680]: 2025-10-05 09:47:37.954111381 +0000 UTC m=+0.123943080 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 05:47:37 localhost systemd[1]: tmp-crun.Hf361s.mount: Deactivated successfully. Oct 5 05:47:37 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:47:37 localhost podman[297681]: 2025-10-05 09:47:37.976098234 +0000 UTC m=+0.142023687 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:47:37 localhost podman[297681]: 2025-10-05 09:47:37.989132296 +0000 UTC 
m=+0.155057719 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:47:38 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:47:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27449 DF PROTO=TCP SPT=58432 DPT=9102 SEQ=1461155490 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77187F70000000001030307) Oct 5 05:47:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:47:39 localhost systemd[1]: tmp-crun.mfrpkS.mount: Deactivated successfully. 
Oct 5 05:47:39 localhost podman[297722]: 2025-10-05 09:47:39.919658423 +0000 UTC m=+0.081187644 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7) Oct 5 05:47:39 localhost podman[297722]: 2025-10-05 09:47:39.935104951 +0000 UTC m=+0.096634162 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:47:39 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:47:44 localhost podman[297743]: 2025-10-05 09:47:44.916464645 +0000 UTC m=+0.083122506 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent) Oct 5 05:47:44 localhost podman[297743]: 2025-10-05 09:47:44.926332371 +0000 UTC 
m=+0.092990242 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:47:44 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:47:46 localhost openstack_network_exporter[250246]: ERROR 09:47:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:46 localhost openstack_network_exporter[250246]: ERROR 09:47:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:47:46 localhost openstack_network_exporter[250246]: ERROR 09:47:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:47:46 localhost openstack_network_exporter[250246]: ERROR 09:47:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:47:46 localhost openstack_network_exporter[250246]: Oct 5 05:47:46 localhost openstack_network_exporter[250246]: ERROR 09:47:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:47:46 localhost openstack_network_exporter[250246]: Oct 5 05:47:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:47:49 localhost podman[297761]: 2025-10-05 09:47:49.915953979 +0000 UTC m=+0.082618794 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:47:49 localhost podman[297761]: 2025-10-05 09:47:49.930188263 +0000 UTC m=+0.096853108 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:47:49 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:47:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:47:51 localhost podman[297784]: 2025-10-05 09:47:51.91073762 +0000 UTC m=+0.080376252 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:47:51 localhost podman[297784]: 2025-10-05 09:47:51.953239838 +0000 UTC m=+0.122878860 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 05:47:51 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:47:56 localhost podman[248157]: time="2025-10-05T09:47:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:47:56 localhost podman[248157]: @ - - [05/Oct/2025:09:47:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:47:56 localhost podman[248157]: @ - - [05/Oct/2025:09:47:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17826 "" "Go-http-client/1.1" Oct 5 05:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:47:56 localhost podman[297804]: 2025-10-05 09:47:56.91231908 +0000 UTC m=+0.079976621 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:47:56 localhost podman[297804]: 2025-10-05 09:47:56.920582853 +0000 UTC m=+0.088240434 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:47:56 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:47:56 localhost podman[297805]: 2025-10-05 09:47:56.977458109 +0000 UTC m=+0.140939118 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Oct 5 05:47:57 localhost podman[297805]: 2025-10-05 
09:47:57.055280431 +0000 UTC m=+0.218761440 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 05:47:57 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.276 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.276 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.276 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.294 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.294 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.295 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.296 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.296 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.297 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.297 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.297 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.298 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.318 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.319 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.319 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.319 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally 
available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.320 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.767 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.962 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.964 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12461MB free_disk=41.8370475769043GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.964 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:47:58 localhost nova_compute[297130]: 2025-10-05 09:47:58.965 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.017 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.017 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.035 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.476 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.482 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.498 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.499 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 05:47:59 localhost nova_compute[297130]: 2025-10-05 09:47:59.500 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:48:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2077 DF PROTO=TCP SPT=33576 DPT=9102 SEQ=4174605260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC771E1460000000001030307)
Oct 5 05:48:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2078 DF PROTO=TCP SPT=33576 DPT=9102 SEQ=4174605260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC771E5360000000001030307)
Oct 5 05:48:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2079 DF PROTO=TCP SPT=33576 DPT=9102 SEQ=4174605260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC771ED360000000001030307)
Oct 5 05:48:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2080 DF PROTO=TCP SPT=33576 DPT=9102 SEQ=4174605260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC771FCF60000000001030307)
Oct 5 05:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:48:08 localhost podman[297890]: 2025-10-05 09:48:08.914848348 +0000 UTC m=+0.081884044 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0) Oct 5 05:48:08 localhost podman[297890]: 2025-10-05 09:48:08.928436864 +0000 UTC m=+0.095472560 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:48:08 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated 
successfully. Oct 5 05:48:09 localhost systemd[1]: tmp-crun.kJ5nY8.mount: Deactivated successfully. Oct 5 05:48:09 localhost podman[297891]: 2025-10-05 09:48:09.024666014 +0000 UTC m=+0.187927357 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:48:09 localhost podman[297891]: 2025-10-05 09:48:09.061132759 +0000 UTC m=+0.224394082 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:48:09 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:48:10 localhost podman[297932]: 2025-10-05 09:48:10.913661978 +0000 UTC m=+0.079347334 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7) Oct 5 05:48:10 localhost podman[297932]: 2025-10-05 09:48:10.955332784 +0000 UTC m=+0.121018120 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container) Oct 5 05:48:10 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:48:15 localhost systemd[1]: tmp-crun.NpgTVg.mount: Deactivated successfully. 
Oct 5 05:48:15 localhost podman[297953]: 2025-10-05 09:48:15.90641167 +0000 UTC m=+0.080266118 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Oct 5 05:48:15 localhost podman[297953]: 2025-10-05 09:48:15.911557149 +0000 UTC 
m=+0.085411567 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 05:48:15 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:48:16 localhost openstack_network_exporter[250246]: ERROR 09:48:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:48:16 localhost openstack_network_exporter[250246]: ERROR 09:48:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:48:16 localhost openstack_network_exporter[250246]: ERROR 09:48:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:48:16 localhost openstack_network_exporter[250246]: ERROR 09:48:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:48:16 localhost openstack_network_exporter[250246]: Oct 5 05:48:16 localhost openstack_network_exporter[250246]: ERROR 09:48:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:48:16 localhost openstack_network_exporter[250246]: Oct 5 05:48:18 localhost ovn_metadata_agent[163196]: 2025-10-05 09:48:18.483 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 05:48:18 localhost ovn_metadata_agent[163196]: 2025-10-05 09:48:18.485 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:48:20.385 
163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:48:20.386 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:48:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:48:20.386 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:48:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:48:20 localhost podman[297969]: 2025-10-05 09:48:20.937235922 +0000 UTC m=+0.107799623 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:48:20 localhost podman[297969]: 2025-10-05 09:48:20.971155328 +0000 UTC m=+0.141718999 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:48:20 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:48:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:48:22 localhost podman[297992]: 2025-10-05 09:48:22.913602618 +0000 UTC m=+0.080161387 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 05:48:22 localhost podman[297992]: 2025-10-05 09:48:22.952348433 +0000 UTC m=+0.118907242 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:48:22 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:48:26 localhost podman[248157]: time="2025-10-05T09:48:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:48:26 localhost podman[248157]: @ - - [05/Oct/2025:09:48:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:48:26 localhost podman[248157]: @ - - [05/Oct/2025:09:48:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17821 "" "Go-http-client/1.1" Oct 5 05:48:26 localhost ovn_metadata_agent[163196]: 2025-10-05 09:48:26.487 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 05:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:48:27 localhost systemd[1]: tmp-crun.PrJ2Ie.mount: Deactivated successfully. 
Oct 5 05:48:27 localhost podman[298030]: 2025-10-05 09:48:27.506086728 +0000 UTC m=+0.082802478 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:48:27 localhost podman[298029]: 2025-10-05 09:48:27.546378806 +0000 UTC m=+0.127029882 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid) Oct 5 05:48:27 localhost podman[298030]: 2025-10-05 09:48:27.567177109 +0000 UTC m=+0.143892819 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Oct 5 05:48:27 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:48:27 localhost podman[298029]: 2025-10-05 09:48:27.583423658 +0000 UTC m=+0.164074734 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible) Oct 5 05:48:27 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:48:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37320 DF PROTO=TCP SPT=59094 DPT=9102 SEQ=753449157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77256750000000001030307) Oct 5 05:48:31 localhost ovn_controller[157556]: 2025-10-05T09:48:31Z|00038|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory Oct 5 05:48:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37321 DF PROTO=TCP SPT=59094 DPT=9102 SEQ=753449157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7725A760000000001030307) Oct 5 05:48:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37322 DF PROTO=TCP SPT=59094 DPT=9102 SEQ=753449157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77262760000000001030307) Oct 5 05:48:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37323 DF PROTO=TCP SPT=59094 DPT=9102 SEQ=753449157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77272360000000001030307) Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:48:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:48:39 localhost podman[298140]: 2025-10-05 09:48:39.921592795 +0000 UTC m=+0.090127485 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:48:39 localhost podman[298140]: 2025-10-05 09:48:39.93065638 +0000 UTC m=+0.099191080 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:48:39 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:48:40 localhost podman[298141]: 2025-10-05 09:48:40.014405052 +0000 UTC m=+0.181011961 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:48:40 localhost podman[298141]: 2025-10-05 09:48:40.04839028 +0000 UTC m=+0.214997209 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:48:40 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:48:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:48:41 localhost podman[298181]: 2025-10-05 09:48:41.909651917 +0000 UTC m=+0.077318800 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.) Oct 5 05:48:41 localhost podman[298181]: 2025-10-05 09:48:41.951316921 +0000 UTC m=+0.118983794 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc.) Oct 5 05:48:41 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:48:46 localhost openstack_network_exporter[250246]: ERROR 09:48:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:48:46 localhost openstack_network_exporter[250246]: ERROR 09:48:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:48:46 localhost openstack_network_exporter[250246]: ERROR 09:48:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:48:46 localhost openstack_network_exporter[250246]: ERROR 09:48:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:48:46 localhost openstack_network_exporter[250246]: Oct 5 05:48:46 localhost openstack_network_exporter[250246]: ERROR 09:48:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:48:46 localhost openstack_network_exporter[250246]: Oct 5 05:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:48:46 localhost systemd[1]: tmp-crun.8isHr6.mount: Deactivated successfully. 
Oct 5 05:48:46 localhost podman[298202]: 2025-10-05 09:48:46.914207341 +0000 UTC m=+0.080181102 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:48:46 localhost podman[298202]: 2025-10-05 09:48:46.947143487 +0000 UTC 
m=+0.113117198 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:48:46 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:48:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:48:51 localhost podman[298221]: 2025-10-05 09:48:51.914427001 +0000 UTC m=+0.082707331 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:48:51 localhost podman[298221]: 2025-10-05 09:48:51.921076581 +0000 UTC m=+0.089356901 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:48:51 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:48:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:48:53 localhost systemd[1]: tmp-crun.QYdsTa.mount: Deactivated successfully. 
Oct 5 05:48:53 localhost podman[298244]: 2025-10-05 09:48:53.925406437 +0000 UTC m=+0.094003949 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3) Oct 5 05:48:53 localhost podman[298244]: 2025-10-05 09:48:53.941468954 +0000 UTC m=+0.110066506 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:48:53 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:48:56 localhost podman[248157]: time="2025-10-05T09:48:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:48:56 localhost podman[248157]: @ - - [05/Oct/2025:09:48:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:48:56 localhost podman[248157]: @ - - [05/Oct/2025:09:48:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17823 "" "Go-http-client/1.1" Oct 5 05:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:48:57 localhost podman[298264]: 2025-10-05 09:48:57.914060313 +0000 UTC m=+0.078941559 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001) Oct 5 05:48:57 localhost podman[298264]: 2025-10-05 09:48:57.928098445 +0000 UTC m=+0.092979731 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:48:57 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:48:57 localhost systemd[1]: tmp-crun.Z3LjUH.mount: Deactivated successfully. Oct 5 05:48:57 localhost podman[298265]: 2025-10-05 09:48:57.972242354 +0000 UTC m=+0.135760573 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible) Oct 5 05:48:58 localhost podman[298265]: 2025-10-05 09:48:58.053131855 +0000 UTC m=+0.216650024 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 05:48:58 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.491 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.492 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.507 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.507 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.507 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.522 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.522 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.524 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.524 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.525 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.525 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:48:59 localhost nova_compute[297130]: 2025-10-05 09:48:59.525 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.309 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.310 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.310 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.310 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c 
- - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.311 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.764 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:49:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18363 DF PROTO=TCP SPT=44592 DPT=9102 SEQ=4159845216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC772CBA50000000001030307) Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.964 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.966 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12461MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.967 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:49:00 localhost nova_compute[297130]: 2025-10-05 09:49:00.967 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.024 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.025 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.043 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.496 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.502 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.527 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.529 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:49:01 localhost nova_compute[297130]: 2025-10-05 09:49:01.530 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:49:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18364 DF PROTO=TCP SPT=44592 DPT=9102 SEQ=4159845216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC772CFB60000000001030307) Oct 5 05:49:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18365 DF PROTO=TCP SPT=44592 DPT=9102 SEQ=4159845216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC772D7B60000000001030307) Oct 5 05:49:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18366 DF PROTO=TCP SPT=44592 DPT=9102 SEQ=4159845216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC772E7770000000001030307) Oct 5 05:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:49:10 localhost podman[298351]: 2025-10-05 09:49:10.920006925 +0000 UTC m=+0.090599565 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:49:10 localhost podman[298351]: 2025-10-05 09:49:10.933366449 +0000 UTC m=+0.103959099 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 05:49:10 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:49:11 localhost podman[298352]: 2025-10-05 09:49:11.021855496 +0000 UTC m=+0.188925910 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:49:11 localhost podman[298352]: 2025-10-05 09:49:11.059210572 +0000 UTC m=+0.226280966 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:49:11 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:49:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:49:12 localhost podman[298393]: 2025-10-05 09:49:12.9136541 +0000 UTC m=+0.080810099 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 5 05:49:12 localhost podman[298393]: 2025-10-05 09:49:12.930241551 +0000 UTC m=+0.097397570 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible) Oct 5 05:49:12 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:49:16 localhost openstack_network_exporter[250246]: ERROR 09:49:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:16 localhost openstack_network_exporter[250246]: ERROR 09:49:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:16 localhost openstack_network_exporter[250246]: ERROR 09:49:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:49:16 localhost openstack_network_exporter[250246]: ERROR 09:49:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:49:16 localhost openstack_network_exporter[250246]: Oct 5 05:49:16 localhost openstack_network_exporter[250246]: ERROR 09:49:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:49:16 localhost openstack_network_exporter[250246]: Oct 5 05:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:49:17 localhost podman[298413]: 2025-10-05 09:49:17.914607319 +0000 UTC m=+0.080685976 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:49:17 localhost podman[298413]: 2025-10-05 09:49:17.94811862 +0000 UTC 
m=+0.114197257 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible) Oct 5 05:49:17 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:49:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:49:20.386 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:49:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:49:20.387 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:49:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:49:20.387 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:49:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:49:22 localhost podman[298432]: 2025-10-05 09:49:22.908029914 +0000 UTC m=+0.078419233 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:49:22 localhost podman[298432]: 2025-10-05 09:49:22.945249946 +0000 UTC m=+0.115639225 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:49:22 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:49:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:49:24 localhost podman[298454]: 2025-10-05 09:49:24.904145955 +0000 UTC m=+0.075902305 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:49:24 localhost podman[298454]: 2025-10-05 09:49:24.919215806 +0000 UTC m=+0.090972196 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Oct 5 05:49:24 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:49:26 localhost podman[248157]: time="2025-10-05T09:49:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:49:26 localhost podman[248157]: @ - - [05/Oct/2025:09:49:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:49:26 localhost podman[248157]: @ - - [05/Oct/2025:09:49:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17820 "" "Go-http-client/1.1" Oct 5 05:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:49:28 localhost podman[298471]: 2025-10-05 09:49:28.911218323 +0000 UTC m=+0.076851102 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:49:28 localhost podman[298471]: 2025-10-05 09:49:28.945270939 +0000 UTC m=+0.110903748 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:49:28 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:49:29 localhost systemd[1]: tmp-crun.X8w2Hz.mount: Deactivated successfully. Oct 5 05:49:29 localhost podman[298472]: 2025-10-05 09:49:29.034402164 +0000 UTC m=+0.193273498 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:49:29 localhost podman[298472]: 2025-10-05 09:49:29.094445536 +0000 UTC m=+0.253316820 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001) Oct 5 05:49:29 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:49:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25634 DF PROTO=TCP SPT=53596 DPT=9102 SEQ=3356212568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77340D60000000001030307) Oct 5 05:49:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25635 DF PROTO=TCP SPT=53596 DPT=9102 SEQ=3356212568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC77344F60000000001030307) Oct 5 05:49:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25636 DF PROTO=TCP SPT=53596 DPT=9102 SEQ=3356212568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7734CF60000000001030307) Oct 5 05:49:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25637 DF PROTO=TCP SPT=53596 DPT=9102 SEQ=3356212568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC7735CB70000000001030307) Oct 5 05:49:41 localhost sshd[298657]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:49:41 localhost systemd-logind[760]: New session 65 of user zuul. Oct 5 05:49:41 localhost systemd[1]: Started Session 65 of User zuul. 
Oct 5 05:49:41 localhost podman[298660]: 2025-10-05 09:49:41.690731409 +0000 UTC m=+0.090233515 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:49:41 localhost podman[298659]: 2025-10-05 09:49:41.739738922 +0000 UTC m=+0.140876142 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Oct 5 05:49:41 localhost podman[298660]: 2025-10-05 09:49:41.750883805 +0000 UTC m=+0.150385911 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:49:41 
localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:49:41 localhost podman[298659]: 2025-10-05 09:49:41.779231506 +0000 UTC m=+0.180368726 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:49:41 localhost systemd[1]: 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:49:41 localhost python3[298722]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:49:42 localhost subscription-manager[298724]: Unregistered machine with identity: 6e63178d-cc41-4e40-9232-0fd1da6b0dc1 Oct 5 05:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:49:43 localhost podman[298726]: 2025-10-05 09:49:43.91550614 +0000 UTC m=+0.082397492 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=) Oct 5 05:49:43 localhost podman[298726]: 2025-10-05 09:49:43.9574069 +0000 UTC m=+0.124298242 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal) Oct 5 05:49:43 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:49:46 localhost openstack_network_exporter[250246]: ERROR 09:49:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:49:46 localhost openstack_network_exporter[250246]: ERROR 09:49:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:46 localhost openstack_network_exporter[250246]: ERROR 09:49:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:49:46 localhost openstack_network_exporter[250246]: ERROR 09:49:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:49:46 localhost openstack_network_exporter[250246]: Oct 5 05:49:46 localhost openstack_network_exporter[250246]: ERROR 09:49:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:49:46 localhost openstack_network_exporter[250246]: Oct 5 05:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:49:48 localhost podman[298746]: 2025-10-05 09:49:48.920860309 +0000 UTC m=+0.075836783 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:49:48 localhost podman[298746]: 2025-10-05 09:49:48.955311627 +0000 UTC 
m=+0.110287961 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:49:48 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:49:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:49:53 localhost podman[298765]: 2025-10-05 09:49:53.916228795 +0000 UTC m=+0.080695726 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:49:53 localhost podman[298765]: 2025-10-05 09:49:53.923087971 +0000 UTC m=+0.087554842 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:49:53 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:49:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:49:55 localhost podman[298789]: 2025-10-05 09:49:55.130585594 +0000 UTC m=+0.075685649 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 05:49:55 localhost podman[298789]: 2025-10-05 09:49:55.143078583 +0000 UTC m=+0.088178588 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 05:49:55 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:49:56 localhost podman[248157]: time="2025-10-05T09:49:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:49:56 localhost podman[248157]: @ - - [05/Oct/2025:09:49:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:49:56 localhost podman[248157]: @ - - [05/Oct/2025:09:49:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17829 "" "Go-http-client/1.1" Oct 5 05:49:59 localhost nova_compute[297130]: 2025-10-05 09:49:59.531 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:49:59 localhost nova_compute[297130]: 2025-10-05 09:49:59.531 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:49:59 localhost nova_compute[297130]: 2025-10-05 09:49:59.532 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:49:59 localhost nova_compute[297130]: 2025-10-05 09:49:59.532 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:49:59 localhost systemd[1]: tmp-crun.kU8co9.mount: Deactivated successfully. Oct 5 05:49:59 localhost podman[298809]: 2025-10-05 09:49:59.646187603 +0000 UTC m=+0.083193314 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller) Oct 5 05:49:59 localhost podman[298809]: 2025-10-05 09:49:59.682442799 +0000 UTC m=+0.119448590 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 05:49:59 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:49:59 localhost podman[298808]: 2025-10-05 09:49:59.694739443 +0000 UTC m=+0.134256723 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:49:59 localhost podman[298808]: 2025-10-05 09:49:59.702955707 +0000 UTC m=+0.142472967 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:49:59 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.291 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.292 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.292 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.315 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.315 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - 
- - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.316 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.769 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:50:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11695 DF PROTO=TCP SPT=36424 DPT=9102 SEQ=2297067117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC773B6060000000001030307) Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.980 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.982 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12467MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.983 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:50:00 localhost nova_compute[297130]: 2025-10-05 09:50:00.983 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.047 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.048 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.074 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.529 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.537 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.553 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.556 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:50:01 localhost nova_compute[297130]: 2025-10-05 09:50:01.556 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:50:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11696 DF PROTO=TCP SPT=36424 DPT=9102 SEQ=2297067117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC773B9F60000000001030307) Oct 5 05:50:02 localhost nova_compute[297130]: 2025-10-05 09:50:02.537 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:02 localhost nova_compute[297130]: 2025-10-05 09:50:02.538 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:02 localhost nova_compute[297130]: 2025-10-05 09:50:02.538 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:50:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11697 DF PROTO=TCP SPT=36424 DPT=9102 
SEQ=2297067117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC773C1F60000000001030307) Oct 5 05:50:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:ea:6c:eb MACDST=fa:16:3e:31:b6:99 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11698 DF PROTO=TCP SPT=36424 DPT=9102 SEQ=2297067117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AC773D1B60000000001030307) Oct 5 05:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:50:11 localhost podman[298895]: 2025-10-05 09:50:11.913382125 +0000 UTC m=+0.082127504 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:50:11 localhost podman[298895]: 2025-10-05 09:50:11.922578385 +0000 UTC m=+0.091323744 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 5 05:50:11 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:50:12 localhost podman[298896]: 2025-10-05 09:50:12.044487622 +0000 UTC m=+0.209863579 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:50:12 localhost podman[298896]: 2025-10-05 09:50:12.055532822 +0000 UTC m=+0.220908789 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:50:12 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:50:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:50:14 localhost podman[298969]: 2025-10-05 09:50:14.733875769 +0000 UTC m=+0.084634264 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container) Oct 5 05:50:14 localhost podman[298969]: 2025-10-05 09:50:14.750176643 +0000 UTC m=+0.100935138 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git) Oct 5 05:50:14 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:50:16 localhost openstack_network_exporter[250246]: ERROR 09:50:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:50:16 localhost openstack_network_exporter[250246]: ERROR 09:50:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:16 localhost openstack_network_exporter[250246]: ERROR 09:50:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:16 localhost openstack_network_exporter[250246]: ERROR 09:50:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:50:16 localhost openstack_network_exporter[250246]: Oct 5 05:50:16 localhost openstack_network_exporter[250246]: ERROR 09:50:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:50:16 localhost openstack_network_exporter[250246]: Oct 5 05:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:50:19 localhost podman[299008]: 2025-10-05 09:50:19.916018717 +0000 UTC m=+0.081735744 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:50:19 localhost podman[299008]: 2025-10-05 09:50:19.947131754 +0000 UTC 
m=+0.112848781 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:50:19 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:50:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:50:20.387 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:50:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:50:20.388 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:50:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:50:20.388 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:50:23 localhost sshd[299027]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:50:23 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 5 05:50:23 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Oct 5 05:50:23 localhost systemd-logind[760]: New session 66 of user tripleo-admin. Oct 5 05:50:23 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 5 05:50:23 localhost systemd[1]: Starting User Manager for UID 1003... Oct 5 05:50:23 localhost systemd[299031]: Queued start job for default target Main User Target. Oct 5 05:50:23 localhost systemd[299031]: Created slice User Application Slice. Oct 5 05:50:23 localhost systemd[299031]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 05:50:23 localhost systemd[299031]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:50:23 localhost systemd[299031]: Reached target Paths. 
Oct 5 05:50:23 localhost systemd[299031]: Reached target Timers. Oct 5 05:50:23 localhost systemd[299031]: Starting D-Bus User Message Bus Socket... Oct 5 05:50:23 localhost systemd[299031]: Starting Create User's Volatile Files and Directories... Oct 5 05:50:23 localhost systemd[299031]: Listening on D-Bus User Message Bus Socket. Oct 5 05:50:23 localhost systemd[299031]: Reached target Sockets. Oct 5 05:50:23 localhost systemd[299031]: Finished Create User's Volatile Files and Directories. Oct 5 05:50:23 localhost systemd[299031]: Reached target Basic System. Oct 5 05:50:23 localhost systemd[299031]: Reached target Main User Target. Oct 5 05:50:23 localhost systemd[299031]: Startup finished in 143ms. Oct 5 05:50:23 localhost systemd[1]: Started User Manager for UID 1003. Oct 5 05:50:23 localhost systemd[1]: Started Session 66 of User tripleo-admin. Oct 5 05:50:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:50:24 localhost systemd[1]: tmp-crun.lUK1t3.mount: Deactivated successfully. 
Oct 5 05:50:24 localhost podman[299174]: 2025-10-05 09:50:24.286020506 +0000 UTC m=+0.094843150 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:50:24 localhost podman[299174]: 2025-10-05 09:50:24.31928295 +0000 UTC m=+0.128105605 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:50:24 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:50:24 localhost python3[299175]: ansible-ansible.builtin.systemd Invoked with name=iptables state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:50:25 localhost python3[299341]: ansible-ansible.builtin.systemd Invoked with name=nftables state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Oct 5 05:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:50:25 localhost systemd[1]: Stopping Netfilter Tables... 
Oct 5 05:50:25 localhost podman[299343]: 2025-10-05 09:50:25.452118443 +0000 UTC m=+0.071500426 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 05:50:25 localhost systemd[1]: nftables.service: Deactivated successfully. Oct 5 05:50:25 localhost systemd[1]: Stopped Netfilter Tables. 
Oct 5 05:50:25 localhost podman[299343]: 2025-10-05 09:50:25.462232717 +0000 UTC m=+0.081614740 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:50:25 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:50:26 localhost podman[248157]: time="2025-10-05T09:50:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:50:26 localhost podman[248157]: @ - - [05/Oct/2025:09:50:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 139979 "" "Go-http-client/1.1" Oct 5 05:50:26 localhost podman[248157]: @ - - [05/Oct/2025:09:50:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17816 "" "Go-http-client/1.1" Oct 5 05:50:26 localhost python3[299510]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/tripleo-rules.nft block=# 100 ceph_alertmanager {'dport': [9093]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard {'dport': [8443]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana {'dport': [3100]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus {'dport': [9092]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw {'dport': ['8080']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon {'dport': [6789, 3300, '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds {'dport': ['6800-7300', '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr {'dport': ['6800-7300', 8444]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,8444 } 
ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs {'dport': ['12049', '2049']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 2049 } ct state new counter accept comment "120 ceph_nfs"#012# 122 ceph rgw {'dport': ['8080', '8080', '9100']}#012add rule inet filter TRIPLEO_INPUT tcp dport { 8080,8080,9100 } ct state new counter accept comment "122 ceph rgw"#012# 123 ceph_dashboard {'dport': [3100, 9090, 9092, 9093, 9094, 9100, 9283]}#012add rule inet filter TRIPLEO_INPUT tcp dport { 3100,9090,9092,9093,9094,9100,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:50:26 localhost systemd-journald[48149]: Field hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 79.3 (264 of 333 items), suggesting rotation. Oct 5 05:50:26 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. Oct 5 05:50:26 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:50:26 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 05:50:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:50:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:50:29 localhost systemd[1]: tmp-crun.YboHRY.mount: Deactivated successfully. 
Oct 5 05:50:29 localhost podman[299547]: 2025-10-05 09:50:29.914949136 +0000 UTC m=+0.081906529 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:50:29 localhost podman[299547]: 2025-10-05 09:50:29.951171911 +0000 UTC m=+0.118129274 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:50:29 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:50:29 localhost podman[299548]: 2025-10-05 09:50:29.964924045 +0000 UTC m=+0.128316641 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller) Oct 5 05:50:30 localhost podman[299548]: 2025-10-05 09:50:30.065106479 +0000 UTC m=+0.228499125 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:50:30 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:50:37 localhost podman[299825]: Oct 5 05:50:37 localhost podman[299825]: 2025-10-05 09:50:37.782572054 +0000 UTC m=+0.077432496 container create 01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_brattain, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64, version=7, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:50:37 localhost systemd[1]: Started libpod-conmon-01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca.scope. Oct 5 05:50:37 localhost systemd[1]: Started libcrun container. 
Oct 5 05:50:37 localhost podman[299825]: 2025-10-05 09:50:37.748148479 +0000 UTC m=+0.043008951 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:50:37 localhost podman[299825]: 2025-10-05 09:50:37.848146379 +0000 UTC m=+0.143006841 container init 01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_brattain, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, version=7, name=rhceph, architecture=x86_64, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True) Oct 5 05:50:37 localhost systemd[1]: tmp-crun.KVKTBQ.mount: Deactivated successfully. 
Oct 5 05:50:37 localhost podman[299825]: 2025-10-05 09:50:37.861556774 +0000 UTC m=+0.156417286 container start 01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_brattain, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, version=7, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, release=553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container) Oct 5 05:50:37 localhost dazzling_brattain[299840]: 167 167 Oct 5 05:50:37 localhost systemd[1]: libpod-01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca.scope: Deactivated successfully. 
Oct 5 05:50:37 localhost podman[299825]: 2025-10-05 09:50:37.862087598 +0000 UTC m=+0.156948070 container attach 01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_brattain, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Oct 5 05:50:37 localhost podman[299825]: 2025-10-05 09:50:37.864474993 +0000 UTC m=+0.159335455 container died 01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_brattain, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, distribution-scope=public, 
vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True) Oct 5 05:50:37 localhost podman[299845]: 2025-10-05 09:50:37.961645065 +0000 UTC m=+0.086217766 container remove 01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_brattain, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public) Oct 5 05:50:37 localhost systemd[1]: libpod-conmon-01ffb93b7aa06d31e9bc9a1b4808c9ff67df5be1389a08332d9a3ce36a0aa3ca.scope: Deactivated successfully. 
Oct 5 05:50:38 localhost systemd[1]: Reloading.
Oct 5 05:50:38 localhost systemd-rc-local-generator[299886]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:50:38 localhost systemd-sysv-generator[299893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:50:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:50:38 localhost systemd[1]: var-lib-containers-storage-overlay-f81a06960bdc368b5df0c199b822b6bdfb86a68cbbad90896984929922497250-merged.mount: Deactivated successfully.
Oct 5 05:50:38 localhost systemd[1]: Reloading.
Oct 5 05:50:38 localhost systemd-rc-local-generator[299928]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:50:38 localhost systemd-sysv-generator[299933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:50:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:50:38 localhost systemd[1]: Starting Ceph mds.mds.np0005471152.pozuqw for 659062ac-50b4-5607-b699-3105da7f55ee...
Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 
09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.882 12 DEBUG ceilometer.polling.manager 
[-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:50:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:50:39 localhost podman[299993]: Oct 5 05:50:39 localhost podman[299993]: 2025-10-05 09:50:39.112631911 +0000 UTC m=+0.095094738 container create 5fee68e80a483ad3352889bf49478672e8c7fb827bc478935d9a33014f181dbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471152-pozuqw, version=7, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, distribution-scope=public, 
com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Oct 5 05:50:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ae420483d912a4e9a68162a4526547a811461e7105b70588f8e7ef9443119f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ae420483d912a4e9a68162a4526547a811461e7105b70588f8e7ef9443119f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ae420483d912a4e9a68162a4526547a811461e7105b70588f8e7ef9443119f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ae420483d912a4e9a68162a4526547a811461e7105b70588f8e7ef9443119f/merged/var/lib/ceph/mds/ceph-mds.np0005471152.pozuqw supports timestamps until 2038 (0x7fffffff) Oct 5 05:50:39 localhost podman[299993]: 2025-10-05 09:50:39.16185294 +0000 UTC m=+0.144315767 container init 5fee68e80a483ad3352889bf49478672e8c7fb827bc478935d9a33014f181dbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471152-pozuqw, io.buildah.version=1.33.12, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, release=553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Oct 5 05:50:39 localhost podman[299993]: 2025-10-05 09:50:39.169128007 +0000 UTC m=+0.151590834 container start 5fee68e80a483ad3352889bf49478672e8c7fb827bc478935d9a33014f181dbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471152-pozuqw, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Oct 5 05:50:39 localhost bash[299993]: 
5fee68e80a483ad3352889bf49478672e8c7fb827bc478935d9a33014f181dbd
Oct 5 05:50:39 localhost podman[299993]: 2025-10-05 09:50:39.081390801 +0000 UTC m=+0.063853708 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:50:39 localhost systemd[1]: Started Ceph mds.mds.np0005471152.pozuqw for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 05:50:39 localhost ceph-mds[300011]: set uid:gid to 167:167 (ceph:ceph)
Oct 5 05:50:39 localhost ceph-mds[300011]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Oct 5 05:50:39 localhost ceph-mds[300011]: main not setting numa affinity
Oct 5 05:50:39 localhost ceph-mds[300011]: pidfile_write: ignore empty --pid-file
Oct 5 05:50:39 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471152-pozuqw[300007]: starting mds.mds.np0005471152.pozuqw at
Oct 5 05:50:39 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw Updating MDS map to version 6 from mon.1
Oct 5 05:50:40 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw Updating MDS map to version 7 from mon.1
Oct 5 05:50:40 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw Monitors have assigned me to become a standby.
Oct 5 05:50:42 localhost systemd[1]: session-65.scope: Deactivated successfully.
Oct 5 05:50:42 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit.
Oct 5 05:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:50:42 localhost systemd-logind[760]: Removed session 65.
Oct 5 05:50:42 localhost systemd[1]: tmp-crun.KpAAUb.mount: Deactivated successfully.
Oct 5 05:50:42 localhost podman[300031]: 2025-10-05 09:50:42.778307672 +0000 UTC m=+0.093104574 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:50:42 localhost podman[300031]: 2025-10-05 09:50:42.789075554 +0000 UTC m=+0.103872426 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm) Oct 5 05:50:42 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:50:42 localhost podman[300032]: 2025-10-05 09:50:42.8763907 +0000 UTC m=+0.187552532 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:50:42 localhost podman[300032]: 2025-10-05 09:50:42.88928318 +0000 UTC m=+0.200445002 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:50:42 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:50:44 localhost systemd[1]: tmp-crun.P5X1IE.mount: Deactivated successfully. Oct 5 05:50:44 localhost podman[300127]: 2025-10-05 09:50:44.91584742 +0000 UTC m=+0.084637183 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm) Oct 5 05:50:44 localhost podman[300127]: 2025-10-05 09:50:44.931168637 +0000 UTC m=+0.099958400 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Oct 5 05:50:44 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:50:45 localhost podman[300218]: 2025-10-05 09:50:45.498657782 +0000 UTC m=+0.091723976 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:50:45 localhost podman[300218]: 2025-10-05 09:50:45.600347828 +0000 UTC m=+0.193413952 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, RELEASE=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, version=7, release=553, com.redhat.component=rhceph-container) Oct 5 05:50:46 localhost openstack_network_exporter[250246]: ERROR 09:50:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:50:46 localhost openstack_network_exporter[250246]: ERROR 09:50:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:50:46 localhost openstack_network_exporter[250246]: Oct 5 05:50:46 localhost openstack_network_exporter[250246]: ERROR 09:50:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:46 localhost openstack_network_exporter[250246]: ERROR 09:50:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:50:46 localhost openstack_network_exporter[250246]: ERROR 09:50:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:50:46 localhost openstack_network_exporter[250246]: Oct 5 05:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:50:50 localhost podman[300342]: 2025-10-05 09:50:50.916255374 +0000 UTC m=+0.085350093 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 05:50:50 localhost podman[300342]: 2025-10-05 09:50:50.946125586 +0000 UTC 
m=+0.115220315 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 05:50:50 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:50:54 localhost podman[300360]: 2025-10-05 09:50:54.913878984 +0000 UTC m=+0.082824034 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:50:54 localhost podman[300360]: 2025-10-05 09:50:54.927266867 +0000 UTC m=+0.096211967 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:50:54 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:50:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:50:55 localhost systemd[1]: tmp-crun.G8BTsO.mount: Deactivated successfully. 
Oct 5 05:50:55 localhost podman[300383]: 2025-10-05 09:50:55.906240734 +0000 UTC m=+0.076986205 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:50:55 localhost podman[300383]: 2025-10-05 09:50:55.944243617 +0000 UTC m=+0.114989048 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd) Oct 5 05:50:55 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:50:56 localhost podman[248157]: time="2025-10-05T09:50:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:50:56 localhost podman[248157]: @ - - [05/Oct/2025:09:50:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142056 "" "Go-http-client/1.1" Oct 5 05:50:56 localhost podman[248157]: @ - - [05/Oct/2025:09:50:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18310 "" "Go-http-client/1.1" Oct 5 05:50:59 localhost nova_compute[297130]: 2025-10-05 09:50:59.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.285 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network 
info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.286 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.301 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.302 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.302 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.302 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.303 2 
DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.752 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:51:00 localhost podman[300426]: 2025-10-05 09:51:00.928252217 +0000 UTC m=+0.087625395 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.940 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.941 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12438MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.941 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:51:00 localhost nova_compute[297130]: 2025-10-05 09:51:00.942 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:51:00 localhost podman[300425]: 2025-10-05 09:51:00.903882584 +0000 UTC m=+0.070261332 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:51:00 localhost podman[300426]: 2025-10-05 09:51:00.96037677 +0000 UTC m=+0.119749968 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:51:00 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:51:00 localhost podman[300425]: 2025-10-05 09:51:00.988357041 +0000 UTC m=+0.154735799 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 05:51:01 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.014 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.015 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.040 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.476 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.481 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.495 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for 
provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.497 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:51:01 localhost nova_compute[297130]: 2025-10-05 09:51:01.497 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.484 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.484 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.507 2 DEBUG 
oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.507 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.508 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.508 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:02 localhost nova_compute[297130]: 2025-10-05 09:51:02.508 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:51:04 localhost nova_compute[297130]: 2025-10-05 09:51:04.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 05:51:12 localhost podman[300492]: 2025-10-05 09:51:12.919834379 +0000 UTC m=+0.088251930 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:51:12 localhost podman[300492]: 2025-10-05 09:51:12.930477479 +0000 UTC m=+0.098895020 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute) Oct 5 05:51:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:51:12 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:51:13 localhost podman[300511]: 2025-10-05 09:51:13.022853512 +0000 UTC m=+0.081376475 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:51:13 localhost podman[300511]: 2025-10-05 09:51:13.031005373 +0000 UTC m=+0.089528286 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:51:13 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:51:15 localhost podman[300535]: 2025-10-05 09:51:15.919867776 +0000 UTC m=+0.086305348 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible) Oct 5 05:51:15 localhost podman[300535]: 2025-10-05 09:51:15.961205521 +0000 UTC m=+0.127643103 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, build-date=2025-08-20T13:12:41, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc.) Oct 5 05:51:15 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:51:16 localhost openstack_network_exporter[250246]: ERROR 09:51:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:16 localhost openstack_network_exporter[250246]: ERROR 09:51:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:16 localhost openstack_network_exporter[250246]: ERROR 09:51:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:51:16 localhost openstack_network_exporter[250246]: ERROR 09:51:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:51:16 localhost openstack_network_exporter[250246]: Oct 5 05:51:16 localhost openstack_network_exporter[250246]: ERROR 09:51:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:51:16 localhost openstack_network_exporter[250246]: Oct 5 05:51:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:51:20.389 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:51:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:51:20.389 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:51:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:51:20.390 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:51:21 localhost 
ceph-mds[300011]: mds.mds.np0005471152.pozuqw Updating MDS map to version 12 from mon.1 Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.12 handle_mds_map i am now mds.0.12 Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.12 handle_mds_map state change up:standby --> up:replay Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.12 replay_start Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.12 waiting for osdmap 80 (which blocklists prior instance) Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.cache creating system inode with ino:0x100 Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.cache creating system inode with ino:0x1 Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.12 Finished replaying journal Oct 5 05:51:21 localhost ceph-mds[300011]: mds.0.12 making mds journal writeable Oct 5 05:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:51:21 localhost podman[300583]: 2025-10-05 09:51:21.916996951 +0000 UTC m=+0.084373836 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Oct 5 05:51:21 localhost podman[300583]: 2025-10-05 09:51:21.926212901 +0000 UTC m=+0.093589766 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent) Oct 5 05:51:21 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:51:22 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw Updating MDS map to version 13 from mon.1 Oct 5 05:51:22 localhost ceph-mds[300011]: mds.0.12 handle_mds_map i am now mds.0.12 Oct 5 05:51:22 localhost ceph-mds[300011]: mds.0.12 handle_mds_map state change up:replay --> up:reconnect Oct 5 05:51:22 localhost ceph-mds[300011]: mds.0.12 reconnect_start Oct 5 05:51:22 localhost ceph-mds[300011]: mds.0.12 reopen_log Oct 5 05:51:22 localhost ceph-mds[300011]: mds.0.12 reconnect_done Oct 5 05:51:23 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw Updating MDS map to version 14 from mon.1 Oct 5 05:51:23 localhost ceph-mds[300011]: mds.0.12 handle_mds_map i am now mds.0.12 Oct 5 05:51:23 localhost ceph-mds[300011]: mds.0.12 handle_mds_map state change up:reconnect --> up:rejoin Oct 5 05:51:23 localhost ceph-mds[300011]: mds.0.12 rejoin_start Oct 5 05:51:23 localhost ceph-mds[300011]: mds.0.12 rejoin_joint_start Oct 5 05:51:23 localhost ceph-mds[300011]: mds.0.12 rejoin_done Oct 5 05:51:24 localhost 
ceph-mds[300011]: mds.mds.np0005471152.pozuqw Updating MDS map to version 15 from mon.1 Oct 5 05:51:24 localhost ceph-mds[300011]: mds.0.12 handle_mds_map i am now mds.0.12 Oct 5 05:51:24 localhost ceph-mds[300011]: mds.0.12 handle_mds_map state change up:rejoin --> up:active Oct 5 05:51:24 localhost ceph-mds[300011]: mds.0.12 recovery_done -- successful recovery! Oct 5 05:51:24 localhost ceph-mds[300011]: mds.0.12 active_start Oct 5 05:51:24 localhost ceph-mds[300011]: mds.0.12 cluster recovered. Oct 5 05:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:51:25 localhost ceph-mds[300011]: mds.pinger is_rank_lagging: rank=0 was never sent ping request. Oct 5 05:51:25 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mds-mds-np0005471152-pozuqw[300007]: 2025-10-05T09:51:25.092+0000 7f6117f5d640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request. Oct 5 05:51:25 localhost podman[300605]: 2025-10-05 09:51:25.134873172 +0000 UTC m=+0.076395599 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:51:25 localhost podman[300605]: 2025-10-05 09:51:25.14616634 +0000 UTC m=+0.087688777 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 
5 05:51:25 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:51:26 localhost podman[248157]: time="2025-10-05T09:51:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:51:26 localhost podman[248157]: @ - - [05/Oct/2025:09:51:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142056 "" "Go-http-client/1.1" Oct 5 05:51:26 localhost podman[248157]: @ - - [05/Oct/2025:09:51:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18316 "" "Go-http-client/1.1" Oct 5 05:51:26 localhost systemd[1]: session-66.scope: Deactivated successfully. Oct 5 05:51:26 localhost systemd[1]: session-66.scope: Consumed 1.941s CPU time. Oct 5 05:51:26 localhost systemd-logind[760]: Session 66 logged out. Waiting for processes to exit. Oct 5 05:51:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:51:26 localhost systemd-logind[760]: Removed session 66. 
Oct 5 05:51:26 localhost podman[300629]: 2025-10-05 09:51:26.347263148 +0000 UTC m=+0.075174816 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:51:26 localhost podman[300629]: 2025-10-05 09:51:26.359256714 +0000 UTC m=+0.087168422 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd) Oct 5 05:51:26 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 05:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:51:31 localhost podman[300648]: 2025-10-05 09:51:31.92264797 +0000 UTC m=+0.082283318 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.build-date=20251001) Oct 5 05:51:31 localhost podman[300648]: 2025-10-05 09:51:31.959201545 +0000 UTC m=+0.118836913 container 
exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.vendor=CentOS) Oct 5 05:51:31 localhost systemd[1]: tmp-crun.ZBDkmN.mount: Deactivated successfully. Oct 5 05:51:31 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:51:31 localhost podman[300660]: 2025-10-05 09:51:31.976963718 +0000 UTC m=+0.086930505 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 05:51:32 localhost podman[300660]: 2025-10-05 09:51:32.042193382 +0000 UTC m=+0.152160169 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:51:32 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:51:36 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 5 05:51:36 localhost systemd[299031]: Activating special unit Exit the Session... Oct 5 05:51:36 localhost systemd[299031]: Stopped target Main User Target. Oct 5 05:51:36 localhost systemd[299031]: Stopped target Basic System. Oct 5 05:51:36 localhost systemd[299031]: Stopped target Paths. Oct 5 05:51:36 localhost systemd[299031]: Stopped target Sockets. Oct 5 05:51:36 localhost systemd[299031]: Stopped target Timers. Oct 5 05:51:36 localhost systemd[299031]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 5 05:51:36 localhost systemd[299031]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:51:36 localhost systemd[299031]: Closed D-Bus User Message Bus Socket. Oct 5 05:51:36 localhost systemd[299031]: Stopped Create User's Volatile Files and Directories. 
Oct 5 05:51:36 localhost systemd[299031]: Removed slice User Application Slice. Oct 5 05:51:36 localhost systemd[299031]: Reached target Shutdown. Oct 5 05:51:36 localhost systemd[299031]: Finished Exit the Session. Oct 5 05:51:36 localhost systemd[299031]: Reached target Exit the Session. Oct 5 05:51:36 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 5 05:51:36 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 5 05:51:36 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Oct 5 05:51:36 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 5 05:51:36 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 5 05:51:36 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 5 05:51:36 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 5 05:51:36 localhost systemd[1]: user-1003.slice: Consumed 2.341s CPU time. Oct 5 05:51:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:51:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:51:43 localhost systemd[1]: tmp-crun.S7rOUk.mount: Deactivated successfully. 
Oct 5 05:51:43 localhost podman[300815]: 2025-10-05 09:51:43.907724929 +0000 UTC m=+0.073517091 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:51:43 localhost podman[300815]: 2025-10-05 09:51:43.948273451 +0000 UTC m=+0.114065663 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:51:43 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:51:43 localhost podman[300814]: 2025-10-05 09:51:43.967651248 +0000 UTC m=+0.133464200 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:51:43 localhost podman[300814]: 2025-10-05 09:51:43.980546169 +0000 UTC m=+0.146359111 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001) Oct 5 05:51:43 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:51:46 localhost openstack_network_exporter[250246]: ERROR 09:51:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:46 localhost openstack_network_exporter[250246]: ERROR 09:51:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:51:46 localhost openstack_network_exporter[250246]: ERROR 09:51:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:51:46 localhost openstack_network_exporter[250246]: ERROR 09:51:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:51:46 localhost openstack_network_exporter[250246]: Oct 5 05:51:46 localhost openstack_network_exporter[250246]: ERROR 09:51:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:51:46 localhost openstack_network_exporter[250246]: Oct 5 05:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:51:46 localhost podman[300857]: 2025-10-05 09:51:46.916079591 +0000 UTC m=+0.083624415 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, name=ubi9-minimal) Oct 5 05:51:46 localhost podman[300857]: 2025-10-05 09:51:46.925135018 +0000 UTC m=+0.092679802 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:51:46 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:51:52 localhost systemd[1]: tmp-crun.p2zQ7X.mount: Deactivated successfully. Oct 5 05:51:52 localhost podman[300877]: 2025-10-05 09:51:52.921569101 +0000 UTC m=+0.081135058 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:51:52 localhost podman[300877]: 2025-10-05 09:51:52.925865178 +0000 UTC m=+0.085431105 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:51:52 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:51:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:51:55 localhost podman[300895]: 2025-10-05 09:51:55.910872436 +0000 UTC m=+0.078881826 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:51:55 localhost podman[300895]: 2025-10-05 09:51:55.921422773 +0000 UTC m=+0.089432193 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:51:55 localhost systemd[1]: 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:51:56 localhost podman[248157]: time="2025-10-05T09:51:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:51:56 localhost podman[248157]: @ - - [05/Oct/2025:09:51:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142056 "" "Go-http-client/1.1" Oct 5 05:51:56 localhost podman[248157]: @ - - [05/Oct/2025:09:51:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18323 "" "Go-http-client/1.1" Oct 5 05:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:51:56 localhost podman[300918]: 2025-10-05 09:51:56.906108646 +0000 UTC m=+0.075076213 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:51:56 localhost podman[300918]: 2025-10-05 09:51:56.940793579 +0000 UTC m=+0.109761196 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true) Oct 5 05:51:56 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:51:58 localhost nova_compute[297130]: 2025-10-05 09:51:58.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:58 localhost nova_compute[297130]: 2025-10-05 09:51:58.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 05:51:58 localhost nova_compute[297130]: 2025-10-05 09:51:58.289 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 05:51:58 localhost nova_compute[297130]: 2025-10-05 09:51:58.291 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:58 localhost nova_compute[297130]: 2025-10-05 09:51:58.291 2 
DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:51:58 localhost nova_compute[297130]: 2025-10-05 09:51:58.304 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:51:59 localhost nova_compute[297130]: 2025-10-05 09:51:59.316 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:01 localhost nova_compute[297130]: 2025-10-05 09:52:01.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:01 localhost nova_compute[297130]: 2025-10-05 09:52:01.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.306 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.307 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.307 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.326 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.326 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.326 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.327 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - 
- - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.327 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:52:02 localhost nova_compute[297130]: 2025-10-05 09:52:02.779 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:52:02 localhost podman[300961]: 2025-10-05 09:52:02.909226924 +0000 UTC m=+0.077700585 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 05:52:02 localhost podman[300962]: 2025-10-05 09:52:02.976547335 +0000 UTC m=+0.139069794 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:52:02 localhost podman[300961]: 2025-10-05 09:52:02.994760279 +0000 UTC m=+0.163233950 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid) Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.002 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.006 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=12444MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.007 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.008 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:52:03 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:52:03 localhost podman[300962]: 2025-10-05 09:52:03.017166169 +0000 UTC m=+0.179688638 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:52:03 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.198 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.199 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.411 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.435 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.436 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - 
- - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.450 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.481 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: 
COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.496 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.972 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:52:03 localhost nova_compute[297130]: 2025-10-05 09:52:03.978 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:52:04 localhost nova_compute[297130]: 2025-10-05 09:52:03.999 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:52:04 localhost nova_compute[297130]: 2025-10-05 09:52:04.001 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:52:04 localhost nova_compute[297130]: 2025-10-05 09:52:04.001 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:52:04 localhost nova_compute[297130]: 2025-10-05 09:52:04.967 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:04 localhost nova_compute[297130]: 2025-10-05 09:52:04.968 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:05 localhost nova_compute[297130]: 2025-10-05 09:52:05.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:52:12 localhost podman[301161]: Oct 5 05:52:12 localhost podman[301161]: 2025-10-05 09:52:12.684352833 +0000 UTC m=+0.076627045 container create ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_volhard, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, release=553, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.openshift.expose-services=) Oct 5 05:52:12 localhost systemd[1]: Started libpod-conmon-ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399.scope. Oct 5 05:52:12 localhost systemd[1]: Started libcrun container. Oct 5 05:52:12 localhost podman[301161]: 2025-10-05 09:52:12.65260863 +0000 UTC m=+0.044882862 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:12 localhost podman[301161]: 2025-10-05 09:52:12.761609245 +0000 UTC m=+0.153883457 container init ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_volhard, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, release=553, com.redhat.component=rhceph-container, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, 
build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:52:12 localhost podman[301161]: 2025-10-05 09:52:12.772801979 +0000 UTC m=+0.165076231 container start ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_volhard, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, release=553, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Oct 5 05:52:12 localhost podman[301161]: 2025-10-05 09:52:12.772996134 +0000 UTC m=+0.165270336 container attach ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_volhard, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red 
Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:52:12 localhost dreamy_volhard[301176]: 167 167 Oct 5 05:52:12 localhost systemd[1]: libpod-ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399.scope: Deactivated successfully. Oct 5 05:52:12 localhost podman[301161]: 2025-10-05 09:52:12.777393583 +0000 UTC m=+0.169667815 container died ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_volhard, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:52:12 localhost podman[301181]: 2025-10-05 09:52:12.879050429 +0000 UTC m=+0.088654493 container remove ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_volhard, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, architecture=x86_64, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:52:12 localhost systemd[1]: libpod-conmon-ac8b8dd0c962300f3972c5f763474bb709300dc482b8ec1f72287c69fba3d399.scope: Deactivated successfully. Oct 5 05:52:12 localhost systemd[1]: Reloading. Oct 5 05:52:13 localhost systemd-sysv-generator[301229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:52:13 localhost systemd-rc-local-generator[301225]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:13 localhost systemd[1]: var-lib-containers-storage-overlay-aa5559dc02489a1394c386a8f2b1bea8766de204212d539139435a4bddf75ed7-merged.mount: Deactivated successfully. Oct 5 05:52:13 localhost systemd[1]: Reloading. Oct 5 05:52:13 localhost systemd-rc-local-generator[301263]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:13 localhost systemd-sysv-generator[301267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:52:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:13 localhost systemd[1]: Starting Ceph mgr.np0005471152.kbhlus for 659062ac-50b4-5607-b699-3105da7f55ee... 
Oct 5 05:52:14 localhost podman[301329]: Oct 5 05:52:14 localhost podman[301329]: 2025-10-05 09:52:14.011737696 +0000 UTC m=+0.067272371 container create 64afb48193d30b5a86479b2d82720e6e7a84ea8e119c2fc9ba94049494721ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , RELEASE=main, release=553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:52:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a90eb03b0d5a7ac42594835bf8594bf73acedf2788a06b829a10344cb2301b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a90eb03b0d5a7ac42594835bf8594bf73acedf2788a06b829a10344cb2301b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a90eb03b0d5a7ac42594835bf8594bf73acedf2788a06b829a10344cb2301b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a90eb03b0d5a7ac42594835bf8594bf73acedf2788a06b829a10344cb2301b/merged/var/lib/ceph/mgr/ceph-np0005471152.kbhlus supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:14 localhost podman[301329]: 2025-10-05 09:52:14.058387195 +0000 UTC m=+0.113921870 container init 64afb48193d30b5a86479b2d82720e6e7a84ea8e119c2fc9ba94049494721ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus, build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True, RELEASE=main, version=7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, maintainer=Guillaume 
Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=553, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12) Oct 5 05:52:14 localhost systemd[1]: tmp-crun.Ky5lOI.mount: Deactivated successfully. Oct 5 05:52:14 localhost podman[301329]: 2025-10-05 09:52:14.070085943 +0000 UTC m=+0.125620638 container start 64afb48193d30b5a86479b2d82720e6e7a84ea8e119c2fc9ba94049494721ca3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main) Oct 5 05:52:14 localhost podman[301329]: 2025-10-05 09:52:13.980734893 +0000 UTC m=+0.036269618 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:14 localhost bash[301329]: 64afb48193d30b5a86479b2d82720e6e7a84ea8e119c2fc9ba94049494721ca3 Oct 5 05:52:14 localhost systemd[1]: 
Started Ceph mgr.np0005471152.kbhlus for 659062ac-50b4-5607-b699-3105da7f55ee. Oct 5 05:52:14 localhost podman[301347]: 2025-10-05 09:52:14.114935743 +0000 UTC m=+0.063787816 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:52:14 localhost ceph-mgr[301363]: set uid:gid to 167:167 (ceph:ceph) Oct 5 05:52:14 localhost ceph-mgr[301363]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Oct 5 05:52:14 localhost ceph-mgr[301363]: pidfile_write: ignore empty --pid-file Oct 5 05:52:14 localhost podman[301347]: 2025-10-05 09:52:14.12403122 +0000 UTC m=+0.072883293 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:52:14 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:52:14 localhost ceph-mgr[301363]: mgr[py] Loading python module 'alerts' Oct 5 05:52:14 localhost podman[301344]: 2025-10-05 09:52:14.181532444 +0000 UTC m=+0.132827644 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 05:52:14 localhost podman[301344]: 2025-10-05 09:52:14.187915588 +0000 UTC m=+0.139210768 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:52:14 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:52:14 localhost ceph-mgr[301363]: mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301363]: mgr[py] Loading python module 'balancer' Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:14.237+0000 7f7092f23140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301363]: mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301363]: mgr[py] Loading python module 'cephadm' Oct 5 05:52:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:14.301+0000 7f7092f23140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 5 05:52:14 localhost ceph-mgr[301363]: mgr[py] Loading python module 'crash' Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Module crash has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 'dashboard' Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:15.011+0000 7f7092f23140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 
'devicehealth' Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 'diskprediction_local' Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:15.576+0000 7f7092f23140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost systemd[1]: tmp-crun.5D9ifr.mount: Deactivated successfully. Oct 5 05:52:15 localhost podman[301546]: 2025-10-05 09:52:15.620264996 +0000 UTC m=+0.102141659 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-type=git, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was 
imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: from numpy import show_config as show_numpy_config Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:15.709+0000 7f7092f23140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 'influx' Oct 5 05:52:15 localhost podman[301546]: 2025-10-05 09:52:15.716988267 +0000 UTC m=+0.198864960 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, RELEASE=main, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 'insights' Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:15.767+0000 7f7092f23140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 'iostat' Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:52:15 localhost ceph-mgr[301363]: mgr[py] Loading python module 'k8sevents' Oct 5 05:52:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:15.882+0000 7f7092f23140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'localpool' Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'mds_autoscaler' Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'mirroring' Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'nfs' Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'orchestrator' Oct 5 05:52:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:16.625+0000 7f7092f23140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 
Oct 5 05:52:16 localhost openstack_network_exporter[250246]: ERROR 09:52:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:16 localhost openstack_network_exporter[250246]: ERROR 09:52:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:16 localhost openstack_network_exporter[250246]: ERROR 09:52:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:52:16 localhost openstack_network_exporter[250246]: ERROR 09:52:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:52:16 localhost openstack_network_exporter[250246]: Oct 5 05:52:16 localhost openstack_network_exporter[250246]: ERROR 09:52:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:52:16 localhost openstack_network_exporter[250246]: Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'osd_perf_query' Oct 5 05:52:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:16.771+0000 7f7092f23140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'osd_support' Oct 5 05:52:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:16.835+0000 7f7092f23140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'pg_autoscaler' Oct 5 05:52:16 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:16.892+0000 7f7092f23140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:52:16 localhost ceph-mgr[301363]: mgr[py] Loading python module 'progress' Oct 5 05:52:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:16.963+0000 7f7092f23140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Module progress has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Loading python module 'prometheus' Oct 5 05:52:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:17.022+0000 7f7092f23140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Loading python module 'rbd_support' Oct 5 05:52:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:17.310+0000 7f7092f23140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Loading python module 'restful' Oct 5 05:52:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:17.389+0000 7f7092f23140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:52:17 localhost podman[301681]: 2025-10-05 09:52:17.552630693 +0000 UTC m=+0.092968199 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7) Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Loading python module 'rgw' Oct 5 05:52:17 localhost podman[301681]: 2025-10-05 09:52:17.59515795 +0000 UTC m=+0.135495436 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 05:52:17 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 5 05:52:17 localhost ceph-mgr[301363]: mgr[py] Loading python module 'rook' Oct 5 05:52:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:17.714+0000 7f7092f23140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module rook has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'selftest' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.133+0000 7f7092f23140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'snap_schedule' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.193+0000 7f7092f23140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'stats' Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'status' Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module status has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'telegraf' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.379+0000 7f7092f23140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'telemetry' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.436+0000 
7f7092f23140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'test_orchestrator' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.563+0000 7f7092f23140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'volumes' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.705+0000 7f7092f23140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Loading python module 'zabbix' Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.887+0000 7f7092f23140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:52:18.945+0000 7f7092f23140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Oct 5 05:52:18 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef1391e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 5 05:52:18 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3986210712 Oct 5 05:52:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:52:20.390 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:52:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:52:20.391 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:52:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:52:20.391 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:52:23 localhost podman[301754]: 2025-10-05 09:52:23.196274492 +0000 UTC m=+0.060846735 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 05:52:23 localhost podman[301754]: 2025-10-05 09:52:23.205172485 +0000 UTC m=+0.069744718 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:52:23 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:52:26 localhost podman[248157]: time="2025-10-05T09:52:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:52:26 localhost podman[248157]: @ - - [05/Oct/2025:09:52:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144122 "" "Go-http-client/1.1" Oct 5 05:52:26 localhost podman[248157]: @ - - [05/Oct/2025:09:52:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18806 "" "Go-http-client/1.1" Oct 5 05:52:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:52:26 localhost podman[301791]: 2025-10-05 09:52:26.401350227 +0000 UTC m=+0.084283054 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:52:26 localhost podman[301791]: 2025-10-05 09:52:26.413253591 +0000 UTC m=+0.096186408 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:52:26 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:52:27 localhost systemd[1]: tmp-crun.TLUeHk.mount: Deactivated successfully. 
Oct 5 05:52:27 localhost podman[301992]: 2025-10-05 09:52:27.130611372 +0000 UTC m=+0.077859289 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd) Oct 5 05:52:27 localhost podman[301992]: 2025-10-05 09:52:27.145071615 +0000 UTC m=+0.092319522 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:52:27 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:52:29 localhost podman[302554]: Oct 5 05:52:29 localhost podman[302554]: 2025-10-05 09:52:29.627609346 +0000 UTC m=+0.077624801 container create 6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_visvesvaraya, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container) Oct 5 05:52:29 localhost systemd[1]: Started libpod-conmon-6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b.scope. Oct 5 05:52:29 localhost systemd[1]: Started libcrun container. 
Oct 5 05:52:29 localhost podman[302554]: 2025-10-05 09:52:29.694470315 +0000 UTC m=+0.144485770 container init 6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_visvesvaraya, name=rhceph, ceph=True, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph) Oct 5 05:52:29 localhost podman[302554]: 2025-10-05 09:52:29.596133221 +0000 UTC m=+0.046148726 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:29 localhost podman[302554]: 2025-10-05 09:52:29.7038539 +0000 UTC m=+0.153869365 container start 6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_visvesvaraya, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_CLEAN=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container) Oct 5 05:52:29 localhost podman[302554]: 2025-10-05 09:52:29.704083727 +0000 UTC m=+0.154099182 container attach 6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_visvesvaraya, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True) Oct 5 
05:52:29 localhost reverent_visvesvaraya[302569]: 167 167 Oct 5 05:52:29 localhost systemd[1]: libpod-6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b.scope: Deactivated successfully. Oct 5 05:52:29 localhost podman[302554]: 2025-10-05 09:52:29.70823383 +0000 UTC m=+0.158249305 container died 6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_visvesvaraya, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Oct 5 05:52:29 localhost podman[302574]: 2025-10-05 09:52:29.810213833 +0000 UTC m=+0.087228673 container remove 6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_visvesvaraya, ceph=True, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main) Oct 5 05:52:29 localhost systemd[1]: libpod-conmon-6d5682547fe93386c427dbfcd9bada0c9fa2e4515b2dcfcc610f9ecca193668b.scope: Deactivated successfully. Oct 5 05:52:29 localhost podman[302590]: Oct 5 05:52:29 localhost podman[302590]: 2025-10-05 09:52:29.925096438 +0000 UTC m=+0.077190510 container create 81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_meitner, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main) Oct 5 05:52:29 localhost systemd[1]: Started libpod-conmon-81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a.scope. Oct 5 05:52:29 localhost systemd[1]: Started libcrun container. Oct 5 05:52:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88cd50730bd48a3245a6335c16e525d11419b4ca90eda2caba7e4bf93a09c878/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88cd50730bd48a3245a6335c16e525d11419b4ca90eda2caba7e4bf93a09c878/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88cd50730bd48a3245a6335c16e525d11419b4ca90eda2caba7e4bf93a09c878/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88cd50730bd48a3245a6335c16e525d11419b4ca90eda2caba7e4bf93a09c878/merged/var/lib/ceph/mon/ceph-np0005471152 supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:29 localhost podman[302590]: 2025-10-05 09:52:29.988847282 +0000 UTC m=+0.140941364 container init 81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_meitner, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph) Oct 5 05:52:29 localhost podman[302590]: 2025-10-05 09:52:29.895475982 +0000 UTC m=+0.047570094 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:29 localhost podman[302590]: 2025-10-05 09:52:29.997159188 +0000 UTC m=+0.149253230 container start 81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_meitner, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:52:29 localhost podman[302590]: 2025-10-05 09:52:29.997329472 +0000 UTC m=+0.149423554 container attach 81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_meitner, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:52:30 localhost systemd[1]: libpod-81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a.scope: Deactivated successfully. 
Oct 5 05:52:30 localhost podman[302590]: 2025-10-05 09:52:30.093724515 +0000 UTC m=+0.245818627 container died 81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_meitner, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.expose-services=) Oct 5 05:52:30 localhost podman[302632]: 2025-10-05 09:52:30.184395281 +0000 UTC m=+0.080680626 container remove 81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_meitner, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=553, GIT_BRANCH=main) Oct 5 05:52:30 localhost systemd[1]: libpod-conmon-81216709f14560ae2fce10e64638a2b0f76b4188e9dec13db0d9a16eeff4f80a.scope: Deactivated successfully. Oct 5 05:52:30 localhost systemd[1]: Reloading. Oct 5 05:52:30 localhost systemd-rc-local-generator[302673]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:52:30 localhost systemd-sysv-generator[302677]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:52:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:30 localhost systemd[1]: var-lib-containers-storage-overlay-f4556838bbd75b4e3e1f4c4320c4c5c500ed75035cd8f2d32d4584fa221482a1-merged.mount: Deactivated successfully. Oct 5 05:52:30 localhost systemd[1]: Reloading. Oct 5 05:52:30 localhost systemd-sysv-generator[302716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:52:30 localhost systemd-rc-local-generator[302710]: /etc/rc.d/rc.local is not marked executable, skipping. 
Oct 5 05:52:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:52:30 localhost systemd[1]: Starting Ceph mon.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee... Oct 5 05:52:31 localhost podman[302775]: Oct 5 05:52:31 localhost podman[302775]: 2025-10-05 09:52:31.281113389 +0000 UTC m=+0.077678483 container create 3155c6e2151277c3bdbfc98ee729963a9a26ec3b0c5c1f0f486450354e49ff99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Oct 5 05:52:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76f6cdeaf6d3bb2c079cf60de79aa0601066cfa0cab34e002cc066d9ba465d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:31 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/c76f6cdeaf6d3bb2c079cf60de79aa0601066cfa0cab34e002cc066d9ba465d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76f6cdeaf6d3bb2c079cf60de79aa0601066cfa0cab34e002cc066d9ba465d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76f6cdeaf6d3bb2c079cf60de79aa0601066cfa0cab34e002cc066d9ba465d0/merged/var/lib/ceph/mon/ceph-np0005471152 supports timestamps until 2038 (0x7fffffff) Oct 5 05:52:31 localhost podman[302775]: 2025-10-05 09:52:31.327334357 +0000 UTC m=+0.123899461 container init 3155c6e2151277c3bdbfc98ee729963a9a26ec3b0c5c1f0f486450354e49ff99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Oct 5 05:52:31 
localhost podman[302775]: 2025-10-05 09:52:31.33552282 +0000 UTC m=+0.132087924 container start 3155c6e2151277c3bdbfc98ee729963a9a26ec3b0c5c1f0f486450354e49ff99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:52:31 localhost bash[302775]: 3155c6e2151277c3bdbfc98ee729963a9a26ec3b0c5c1f0f486450354e49ff99 Oct 5 05:52:31 localhost podman[302775]: 2025-10-05 09:52:31.249903361 +0000 UTC m=+0.046468465 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:52:31 localhost systemd[1]: Started Ceph mon.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee. 
Oct 5 05:52:31 localhost ceph-mon[302793]: set uid:gid to 167:167 (ceph:ceph) Oct 5 05:52:31 localhost ceph-mon[302793]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Oct 5 05:52:31 localhost ceph-mon[302793]: pidfile_write: ignore empty --pid-file Oct 5 05:52:31 localhost ceph-mon[302793]: load: jerasure load: lrc Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: RocksDB version: 7.9.2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Git sha 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: DB SUMMARY Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: DB Session ID: 9CM0VQKEVS9AVS76DTPQ Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: CURRENT file: CURRENT Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: IDENTITY file: IDENTITY Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005471152/store.db dir, Total Num: 0, files: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005471152/store.db: 000004.log size: 761 ; Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.error_if_exists: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.create_if_missing: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.paranoid_checks: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.env: 0x5647b366a9e0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.fs: PosixFileSystem Oct 5 05:52:31 localhost 
ceph-mon[302793]: rocksdb: Options.info_log: 0x5647b5442d20 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_file_opening_threads: 16 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.statistics: (nil) Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.use_fsync: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_log_file_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.allow_fallocate: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.use_direct_reads: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.create_missing_column_families: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.db_log_dir: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.wal_dir: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.table_cache_numshardbits: 6 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.advise_random_on_open: 
1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.write_buffer_manager: 0x5647b5453540 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.rate_limiter: (nil) Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.unordered_write: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.row_cache: None Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.wal_filter: None Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.two_write_queues: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.manual_wal_flush: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.wal_compression: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.atomic_flush: 0 Oct 5 
05:52:31 localhost ceph-mon[302793]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.log_readahead_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.best_efforts_recovery: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.allow_data_in_errors: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.db_host_id: __hostname__ Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.enforce_single_del_contracts: true Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_background_jobs: 2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_background_compactions: -1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_subcompactions: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.delayed_write_rate : 16777216 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_total_wal_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.stats_dump_period_sec: 600 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.stats_persist_period_sec: 600 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 5 05:52:31 
localhost ceph-mon[302793]: rocksdb: Options.max_open_files: -1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bytes_per_sync: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_readahead_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_background_flushes: -1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Compression algorithms supported: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kZSTD supported: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kXpressCompression supported: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kBZip2Compression supported: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kLZ4Compression supported: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kZlibCompression supported: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: #011kSnappyCompression supported: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: DMutex implementation: pthread_mutex_t Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005471152/store.db/MANIFEST-000005 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.merge_operator: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_filter: 
None Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_filter_factory: None Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.sst_partitioner_factory: None Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5647b5442980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5647b543f350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.write_buffer_size: 33554432 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_write_buffer_number: 2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression: NoCompression Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression: 
Disabled Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.prefix_extractor: nullptr Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.num_levels: 7 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.level: 32767 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.enabled: false Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_base: 268435456 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.arena_block_size: 1048576 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.table_properties_collectors: Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.inplace_update_support: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: 
Options.inplace_update_num_locks: 10000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.bloom_locality: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.max_successive_merges: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.force_consistency_checks: 1 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.ttl: 2592000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.enable_blob_files: false Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.min_blob_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.blob_file_size: 268435456 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: 
Options.blob_file_starting_level: 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005471152/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0f9cfb4a-c800-498a-8c29-7c6387860712 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657951386705, "job": 1, "event": "recovery_started", "wal_files": [4]} Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657951388902, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": 
"NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657951389010, "job": 1, "event": "recovery_finished"} Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5647b5466e00 Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: DB pointer 0x5647b555c000 Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152 does not exist in monmap, will attempt to join an existing cluster Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:52:31 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown 
for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5647b543f350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 05:52:31 localhost ceph-mon[302793]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] Oct 5 05:52:31 localhost ceph-mon[302793]: starting mon.np0005471152 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005471152 fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(???) 
e0 preinit fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing) e3 sync_obtain_latest_monmap Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3 Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).mds e16 new map Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-05T08:04:17.819317+0000#012modified#0112025-10-05T09:51:24.604984+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01180#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26863}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26863 members: 26863#012[mds.mds.np0005471152.pozuqw{0:26863} state up:active seq 14 addr [v2:172.18.0.108:6808/114949388,v1:172.18.0.108:6809/114949388] compat 
{c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005471151.uyxcpj{-1:17211} state up:standby seq 1 addr [v2:172.18.0.107:6808/3905827397,v1:172.18.0.107:6809/3905827397] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005471150.bsiqok{-1:17217} state up:standby seq 1 addr [v2:172.18.0.106:6808/1854153836,v1:172.18.0.106:6809/1854153836] compat {c=[1],r=[1],i=[17ff]}] Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).osd e81 crush map has features 3314933000854323200, adjusting msgr requires Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).osd e81 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).osd e81 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).osd e81 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:52:31 localhost ceph-mon[302793]: Removing key for mds.mds.np0005471148.dhrare Oct 5 05:52:31 localhost ceph-mon[302793]: Removing daemon mds.mds.np0005471147.whcunt from np0005471147.localdomain -- ports [] Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005471147.whcunt"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005471147.whcunt"}]': finished Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Removing key for mds.mds.np0005471147.whcunt Oct 5 05:52:31 localhost ceph-mon[302793]: 
from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mgr to host np0005471150.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mgr to host np0005471151.localdomain Oct 5 05:52:31 
localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mgr to host np0005471152.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 5 05:52:31 localhost ceph-mon[302793]: Saving service mgr spec with placement label:mgr Oct 5 05:52:31 localhost 
ceph-mon[302793]: Deploying daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Deploying daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mon to host np0005471146.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' 
entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Oct 5 05:52:31 localhost ceph-mon[302793]: Added label _admin to host np0005471146.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: Deploying daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mon to host np0005471147.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label _admin to host np0005471147.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mon to host np0005471148.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' 
entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label _admin to host np0005471148.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", 
"entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mon to host np0005471150.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label _admin to host np0005471150.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mon to host np0005471151.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost 
ceph-mon[302793]: Added label _admin to host np0005471151.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:31 localhost ceph-mon[302793]: Added label mon to host np0005471152.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Added label _admin to host np0005471152.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:31 localhost ceph-mon[302793]: Saving service 
mon spec with placement label:mon Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:52:31 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:31 localhost ceph-mon[302793]: Deploying daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:52:31 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:31 localhost ceph-mon[302793]: mon.np0005471152@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Oct 5 05:52:31 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef1391e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 5 05:52:33 localhost ceph-mon[302793]: mon.np0005471152@-1(probing) e4 my rank is now 3 (was -1) Oct 5 05:52:33 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:52:33 localhost ceph-mon[302793]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 Oct 5 05:52:33 localhost ceph-mon[302793]: 
mon.np0005471152@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:52:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:52:33 localhost podman[302832]: 2025-10-05 09:52:33.916292584 +0000 UTC m=+0.084407047 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3) Oct 5 05:52:33 localhost podman[302832]: 2025-10-05 09:52:33.930186261 +0000 UTC m=+0.098300724 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:52:33 localhost podman[302833]: 2025-10-05 09:52:33.967843856 +0000 UTC m=+0.133363708 container 
health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 05:52:34 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:52:34 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Oct 5 05:52:34 localhost podman[302833]: 2025-10-05 09:52:34.071137105 +0000 UTC m=+0.236656997 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:52:34 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:52:34 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Oct 5 05:52:35 localhost ceph-mds[300011]: mds.beacon.mds.np0005471152.pozuqw missed beacon ack from the monitors Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:36 localhost ceph-mon[302793]: mgrc update_daemon_metadata mon.np0005471152 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005471152.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat 
Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005471152.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Oct 5 05:52:36 localhost ceph-mon[302793]: Deploying daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471146 calling monitor election Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471147 calling monitor election Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:52:36 localhost ceph-mon[302793]: mon.np0005471146 is new leader, mons np0005471146,np0005471148,np0005471147,np0005471152 in quorum (ranks 0,1,2,3) Oct 5 05:52:36 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:52:36 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:36 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:37 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:37 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:37 localhost ceph-mon[302793]: Deploying daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:52:38 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Oct 5 05:52:38 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef138f20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 5 05:52:38 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:52:38 localhost 
ceph-mon[302793]: paxos.3).electionLogic(18) init, last seen epoch 18 Oct 5 05:52:38 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.880 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager 
[-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:52:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:52:39 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Oct 5 05:52:39 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Oct 5 05:52:39 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Oct 5 05:52:41 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Oct 5 05:52:43 localhost ceph-mon[302793]: paxos.3).electionLogic(19) init, last seen epoch 19, mid-election, bumping Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 collect_metadata vda: no 
unique device id for vda: fallback method has no model nor serial Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471146 calling monitor election Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471147 calling monitor election Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471146 is new leader, mons np0005471146,np0005471148,np0005471147,np0005471152,np0005471151 in quorum (ranks 0,1,2,3,4) Oct 5 05:52:43 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:52:43 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:43 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Oct 5 05:52:43 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef139600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Oct 5 05:52:43 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:52:43 localhost ceph-mon[302793]: paxos.3).electionLogic(22) init, last seen epoch 22 Oct 5 05:52:43 localhost 
ceph-mon[302793]: mon.np0005471152@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:43 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:52:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:52:44 localhost podman[302912]: 2025-10-05 09:52:44.285034599 +0000 UTC m=+0.092960459 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:52:44 localhost podman[302912]: 2025-10-05 09:52:44.302445363 +0000 UTC m=+0.110371273 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, 
maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:52:44 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:52:44 localhost systemd[1]: tmp-crun.4D7cRv.mount: Deactivated successfully. Oct 5 05:52:44 localhost podman[302947]: 2025-10-05 09:52:44.404757706 +0000 UTC m=+0.118018521 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute) Oct 5 05:52:44 localhost podman[302947]: 2025-10-05 09:52:44.419287331 +0000 UTC m=+0.132548186 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 05:52:44 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:52:45 localhost podman[303044]: 2025-10-05 09:52:45.220195444 +0000 UTC m=+0.098284054 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, release=553, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:52:45 localhost systemd[1]: tmp-crun.pJHCUn.mount: Deactivated successfully. Oct 5 05:52:45 localhost podman[303044]: 2025-10-05 09:52:45.351451305 +0000 UTC m=+0.229539955 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Oct 5 05:52:46 localhost openstack_network_exporter[250246]: ERROR 09:52:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:52:46 localhost openstack_network_exporter[250246]: ERROR 09:52:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:46 localhost 
openstack_network_exporter[250246]: ERROR 09:52:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:52:46 localhost openstack_network_exporter[250246]: ERROR 09:52:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:52:46 localhost openstack_network_exporter[250246]: Oct 5 05:52:46 localhost openstack_network_exporter[250246]: ERROR 09:52:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:52:46 localhost openstack_network_exporter[250246]: Oct 5 05:52:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:52:47 localhost podman[303164]: 2025-10-05 09:52:47.927161071 +0000 UTC m=+0.085067096 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9) Oct 5 05:52:47 localhost podman[303164]: 2025-10-05 09:52:47.965466912 +0000 UTC m=+0.123372927 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf 
as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public) Oct 5 05:52:47 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471152@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e6 handle_auth_request failed to assign global_id Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471146 calling monitor election Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471147 calling monitor election Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election Oct 5 05:52:48 localhost ceph-mon[302793]: mon.np0005471146 is new leader, mons np0005471146,np0005471148,np0005471147,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3,4,5) Oct 5 05:52:48 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:52:48 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:48 
localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:49 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:50 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:52:50 localhost ceph-mon[302793]: Updating np0005471146.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:50 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: Updating np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: Updating 
np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:52 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:53 localhost ceph-mon[302793]: Reconfiguring mon.np0005471146 (monmap 
changed)... Oct 5 05:52:53 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:53 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471146 on np0005471146.localdomain Oct 5 05:52:53 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:52:53 localhost podman[303591]: 2025-10-05 09:52:53.913533271 +0000 UTC m=+0.080888040 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:52:53 localhost podman[303591]: 2025-10-05 09:52:53.944916516 +0000 UTC m=+0.112271085 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:52:53 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:52:54 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471146.xqzesq (monmap changed)... Oct 5 05:52:54 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471146.xqzesq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:54 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471146.xqzesq on np0005471146.localdomain Oct 5 05:52:54 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:54 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471146.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:52:55 localhost ceph-mon[302793]: Reconfiguring crash.np0005471146 (monmap changed)... 
Oct 5 05:52:55 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471146 on np0005471146.localdomain Oct 5 05:52:55 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:55 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:55 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:52:56 localhost podman[248157]: time="2025-10-05T09:52:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:52:56 localhost podman[248157]: @ - - [05/Oct/2025:09:52:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:52:56 localhost podman[248157]: @ - - [05/Oct/2025:09:52:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19288 "" "Go-http-client/1.1" Oct 5 05:52:56 localhost ceph-mon[302793]: Reconfiguring crash.np0005471147 (monmap changed)... Oct 5 05:52:56 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain Oct 5 05:52:56 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:56 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:56 localhost ceph-mon[302793]: Reconfiguring mon.np0005471147 (monmap changed)... 
Oct 5 05:52:56 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:52:56 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain Oct 5 05:52:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:52:56 localhost podman[303607]: 2025-10-05 09:52:56.911000188 +0000 UTC m=+0.077934130 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:52:56 localhost podman[303607]: 
2025-10-05 09:52:56.924199027 +0000 UTC m=+0.091132949 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:52:56 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:52:57 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e6 handle_command mon_command({"prefix": "mgr fail"} v 0) Oct 5 05:52:57 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='client.? 
172.18.0.103:0/3926179074' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:52:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:52:57 localhost ceph-mon[302793]: mon.np0005471152@3(peon).osd e81 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Oct 5 05:52:57 localhost ceph-mon[302793]: mon.np0005471152@3(peon).osd e81 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Oct 5 05:52:57 localhost ceph-mon[302793]: mon.np0005471152@3(peon).osd e82 e82: 6 total, 6 up, 6 in Oct 5 05:52:57 localhost systemd[1]: session-21.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-18.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-19.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: tmp-crun.eyUyqu.mount: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-24.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-17.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-14.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-22.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-20.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit. 
Oct 5 05:52:57 localhost podman[303631]: 2025-10-05 09:52:57.426335355 +0000 UTC m=+0.104035220 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001) Oct 5 05:52:57 localhost systemd-logind[760]: Session 24 logged out. Waiting for processes to exit. 
Oct 5 05:52:57 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 21. Oct 5 05:52:57 localhost systemd[1]: session-25.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-23.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-26.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd[1]: session-26.scope: Consumed 3min 29.214s CPU time. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 18. Oct 5 05:52:57 localhost systemd[1]: session-16.scope: Deactivated successfully. Oct 5 05:52:57 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost podman[303631]: 2025-10-05 09:52:57.444273393 +0000 UTC m=+0.121973308 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:52:57 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 16 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 19. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 24. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 17. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 14. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 22. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 20. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 25. Oct 5 05:52:57 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 23. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 26. Oct 5 05:52:57 localhost systemd-logind[760]: Removed session 16. 
Oct 5 05:52:57 localhost sshd[303648]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:52:57 localhost systemd-logind[760]: New session 68 of user ceph-admin. Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' Oct 5 05:52:57 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)... Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.14120 172.18.0.103:0/920404092' entity='mgr.np0005471146.xqzesq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:52:57 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain Oct 5 05:52:57 localhost ceph-mon[302793]: from='client.? 172.18.0.103:0/3926179074' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:52:57 localhost ceph-mon[302793]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:52:57 localhost ceph-mon[302793]: Activating manager daemon np0005471151.jecxod Oct 5 05:52:57 localhost ceph-mon[302793]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:52:57 localhost ceph-mon[302793]: Manager daemon np0005471151.jecxod is now available Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch Oct 5 05:52:57 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch Oct 5 05:52:57 localhost systemd[1]: Started Session 68 of User ceph-admin. Oct 5 05:52:58 localhost systemd[1]: tmp-crun.2CvtrZ.mount: Deactivated successfully. 
Oct 5 05:52:58 localhost podman[303762]: 2025-10-05 09:52:58.937556898 +0000 UTC m=+0.107264158 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:52:59 localhost podman[303762]: 2025-10-05 09:52:59.068181011 +0000 UTC m=+0.237888291 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, build-date=2025-09-24T08:57:55, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, version=7, ceph=True, distribution-scope=public) Oct 5 05:53:00 localhost nova_compute[297130]: 2025-10-05 09:53:00.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:00 localhost ceph-mon[302793]: [05/Oct/2025:09:52:59] ENGINE Bus STARTING Oct 5 05:53:00 localhost ceph-mon[302793]: [05/Oct/2025:09:52:59] ENGINE Serving on http://172.18.0.107:8765 Oct 5 05:53:00 localhost ceph-mon[302793]: [05/Oct/2025:09:52:59] ENGINE Serving on https://172.18.0.107:7150 Oct 5 05:53:00 localhost ceph-mon[302793]: [05/Oct/2025:09:52:59] ENGINE Bus STARTED Oct 5 05:53:00 localhost ceph-mon[302793]: [05/Oct/2025:09:52:59] ENGINE Client ('172.18.0.107', 36886) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost 
ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:00 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:01 localhost ceph-mon[302793]: mon.np0005471152@3(peon).osd e82 _set_new_cache_sizes cache_size:1019711986 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: 
from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471146", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' 
entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471146", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:53:02 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 
05:53:02 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:53:02 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:53:02 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:53:02 localhost ceph-mon[302793]: Updating np0005471146.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - 
-] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.294 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.294 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.295 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.295 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.315 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.315 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.316 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:53:02 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:53:02 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2068059284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.749 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.929 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.930 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11984MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": 
"1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.931 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:53:02 localhost nova_compute[297130]: 2025-10-05 09:53:02.931 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.012 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total 
allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.012 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.042 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:03 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 
5 05:53:03 localhost ceph-mon[302793]: Updating np0005471146.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.534 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.538 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.556 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.557 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:53:03 localhost nova_compute[297130]: 2025-10-05 09:53:03.557 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:53:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:53:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:53:04 localhost systemd[1]: tmp-crun.25dZyB.mount: Deactivated successfully. Oct 5 05:53:04 localhost podman[304656]: 2025-10-05 09:53:04.204436071 +0000 UTC m=+0.102568497 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 05:53:04 localhost podman[304656]: 2025-10-05 09:53:04.212708974 +0000 UTC m=+0.110841360 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:04 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' Oct 5 05:53:04 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: 
Deactivated successfully. Oct 5 05:53:04 localhost podman[304657]: 2025-10-05 09:53:04.297673314 +0000 UTC m=+0.194139264 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:53:04 localhost podman[304657]: 2025-10-05 09:53:04.377200628 +0000 UTC m=+0.273666578 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 05:53:04 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:53:04 localhost nova_compute[297130]: 2025-10-05 09:53:04.535 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:53:04 localhost nova_compute[297130]: 2025-10-05 09:53:04.536 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:53:04 localhost nova_compute[297130]: 2025-10-05 09:53:04.536 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:05 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:05 localhost nova_compute[297130]: 2025-10-05 09:53:05.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:53:05 localhost nova_compute[297130]: 2025-10-05 09:53:05.282 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:53:06 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:53:06 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:53:06 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:06 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:06 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:53:06 localhost nova_compute[297130]: 2025-10-05 09:53:06.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:53:06 localhost ceph-mon[302793]: mon.np0005471152@3(peon).osd e82 _set_new_cache_sizes cache_size:1020047173 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.063284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987063382, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11006, "num_deletes": 513, "total_data_size": 16349571, "memory_usage": 16973328, "flush_reason": "Manual Compaction"}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987146321, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11709591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11011, "table_properties": {"data_size": 11655435, "index_size": 28834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24261, "raw_key_size": 255652, "raw_average_key_size": 26, "raw_value_size": 11490849, "raw_average_value_size": 1185, "num_data_blocks": 1088, "num_entries": 9694, "num_filter_entries": 9694, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 1759657951, "file_creation_time": 1759657987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 83134 microseconds, and 27641 cpu microseconds.
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.146412) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11709591 bytes OK
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.146448) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.178636) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.178664) EVENT_LOG_v1 {"time_micros": 1759657987178658, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.178686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 16275460, prev total WAL file size 16275460, number of live WAL files 2.
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.181622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1887B)]
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987181732, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11711478, "oldest_snapshot_seqno": -1}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9185 keys, 11701799 bytes, temperature: kUnknown
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987255935, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11701799, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11648951, "index_size": 28811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 247238, "raw_average_key_size": 26, "raw_value_size": 11490907, "raw_average_value_size": 1251, "num_data_blocks": 1087, "num_entries": 9185, "num_filter_entries": 9185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759657987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.256831) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11701799 bytes
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.258625) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.5 rd, 156.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.2, 0.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9699, records dropped: 514 output_compression: NoCompression
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.258657) EVENT_LOG_v1 {"time_micros": 1759657987258643, "job": 4, "event": "compaction_finished", "compaction_time_micros": 74835, "compaction_time_cpu_micros": 35048, "output_level": 6, "num_output_files": 1, "total_output_size": 11701799, "num_input_records": 9699, "num_output_records": 9185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987261015, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759657987261250, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 5 05:53:07 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:53:07.181520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:53:07 localhost ceph-mon[302793]: Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:53:07 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:53:07 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:07 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:07 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:07 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:08 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:53:08 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:53:08 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:08 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:08 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:08 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:08 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:09 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:53:09 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:09 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:09 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:09 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:53:10 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)...
Oct 5 05:53:10 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:53:10 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:10 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:10 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:53:11 localhost ceph-mon[302793]: mon.np0005471152@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054620 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:53:11 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)...
Oct 5 05:53:11 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:53:11 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:11 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:11 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:53:11 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:53:12 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:53:12 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:53:12 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:12 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:12 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:12 localhost ceph-mon[302793]: from='mgr.17391 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:13 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef139600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Oct 5 05:53:13 localhost ceph-mon[302793]: mon.np0005471152@3(peon) e7 my rank is now 2 (was 3)
Oct 5 05:53:13 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Oct 5 05:53:13 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Oct 5 05:53:13 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election
Oct 5 05:53:13 localhost ceph-mon[302793]: paxos.2).electionLogic(26) init, last seen epoch 26
Oct 5 05:53:13 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:53:13 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:53:13 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:53:13 localhost ceph-mgr[301363]: --2- 172.18.0.108:0/3451461818 >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x55b2f8bc2400 0x55b2f89e9600 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2
Oct 5 05:53:13 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Oct 5 05:53:13 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Oct 5 05:53:13 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef138f20 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0
Oct 5 05:53:13 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:53:14 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:53:14 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:53:14 localhost ceph-mon[302793]: Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:53:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:53:14 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:53:14 localhost ceph-mon[302793]: Remove daemons mon.np0005471146
Oct 5 05:53:14 localhost ceph-mon[302793]: Safe to remove mon.np0005471146: new quorum should be ['np0005471148', 'np0005471147', 'np0005471152', 'np0005471151', 'np0005471150'] (from ['np0005471148', 'np0005471147', 'np0005471152', 'np0005471151', 'np0005471150'])
Oct 5 05:53:14 localhost ceph-mon[302793]: Removing monitor np0005471146 from monmap...
Oct 5 05:53:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "mon rm", "name": "np0005471146"} : dispatch
Oct 5 05:53:14 localhost ceph-mon[302793]: Removing daemon mon.np0005471146 from np0005471146.localdomain -- ports []
Oct 5 05:53:14 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election
Oct 5 05:53:14 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election
Oct 5 05:53:14 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election
Oct 5 05:53:14 localhost ceph-mon[302793]: mon.np0005471147 calling monitor election
Oct 5 05:53:14 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election
Oct 5 05:53:14 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471147,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3,4)
Oct 5 05:53:14 localhost ceph-mon[302793]: overall HEALTH_OK
Oct 5 05:53:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:53:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:53:14 localhost podman[304753]: 2025-10-05 09:53:14.89978269 +0000 UTC m=+0.068529489 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 05:53:14 localhost podman[304753]: 2025-10-05 09:53:14.908662219 +0000 UTC m=+0.077409058 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:53:14 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 05:53:14 localhost podman[304754]: 2025-10-05 09:53:14.962908621 +0000 UTC m=+0.129052490 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:53:15 localhost podman[304754]: 2025-10-05 09:53:15.011656366 +0000 UTC m=+0.177800185 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:53:15 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 05:53:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:15 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:53:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:15 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:53:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:53:16 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:53:16 localhost ceph-mon[302793]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:53:16 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:53:16 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:16 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:16 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:53:16 localhost openstack_network_exporter[250246]: ERROR 09:53:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:53:16 localhost openstack_network_exporter[250246]: ERROR 09:53:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:53:16 localhost openstack_network_exporter[250246]: ERROR 09:53:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:53:16 localhost openstack_network_exporter[250246]: ERROR 09:53:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:53:16 localhost openstack_network_exporter[250246]:
Oct 5 05:53:16 localhost openstack_network_exporter[250246]: ERROR 09:53:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:53:16 localhost openstack_network_exporter[250246]:
Oct 5 05:53:17 localhost ceph-mon[302793]: Reconfiguring osd.5 (monmap changed)...
Oct 5 05:53:17 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:53:17 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:17 localhost ceph-mon[302793]: Removed label mon from host np0005471146.localdomain
Oct 5 05:53:17 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:17 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:17 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 05:53:18 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:53:18 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:53:18 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:18 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:18 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:18 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:18 localhost podman[304795]: 2025-10-05 09:53:18.916618513 +0000 UTC m=+0.081387775 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Oct 5 05:53:18 localhost podman[304795]: 2025-10-05 09:53:18.953339083 +0000 UTC m=+0.118108305 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 5 05:53:18 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 05:53:19 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:53:19 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:53:19 localhost ceph-mon[302793]: Removed label mgr from host np0005471146.localdomain Oct 5 05:53:19 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:19 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:19 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:19 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:53:20.391 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:53:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:53:20.392 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:53:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:53:20.392 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:53:20 localhost podman[304868]: Oct 5 05:53:20 localhost podman[304868]: 2025-10-05 09:53:20.682562268 +0000 UTC m=+0.074542190 container create 9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mcclintock, release=553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:53:20 localhost systemd[1]: Started libpod-conmon-9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9.scope. Oct 5 05:53:20 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:20 localhost podman[304868]: 2025-10-05 09:53:20.652013105 +0000 UTC m=+0.043993047 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:20 localhost podman[304868]: 2025-10-05 09:53:20.752260107 +0000 UTC m=+0.144240029 container init 9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mcclintock, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) 
Oct 5 05:53:20 localhost podman[304868]: 2025-10-05 09:53:20.763069789 +0000 UTC m=+0.155049721 container start 9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mcclintock, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, GIT_BRANCH=main, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Oct 5 05:53:20 localhost podman[304868]: 2025-10-05 09:53:20.763399277 +0000 UTC m=+0.155379229 container attach 9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mcclintock, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, name=rhceph, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, 
vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553) Oct 5 05:53:20 localhost systemd[1]: libpod-9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9.scope: Deactivated successfully. Oct 5 05:53:20 localhost busy_mcclintock[304883]: 167 167 Oct 5 05:53:20 localhost podman[304868]: 2025-10-05 09:53:20.768487245 +0000 UTC m=+0.160467197 container died 9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mcclintock, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, 
maintainer=Guillaume Abrioux ) Oct 5 05:53:20 localhost podman[304888]: 2025-10-05 09:53:20.858560933 +0000 UTC m=+0.081049006 container remove 9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mcclintock, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64) Oct 5 05:53:20 localhost systemd[1]: libpod-conmon-9cbfdb1cf052f71636951fdb01ad7ecfaa08032f6c7b2874ccb5e5af153045d9.scope: Deactivated successfully. Oct 5 05:53:20 localhost ceph-mon[302793]: Reconfiguring mon.np0005471151 (monmap changed)... 
Oct 5 05:53:20 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:53:20 localhost ceph-mon[302793]: Removed label _admin from host np0005471146.localdomain Oct 5 05:53:20 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:20 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:20 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:21 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:21 localhost podman[304957]: Oct 5 05:53:21 localhost podman[304957]: 2025-10-05 09:53:21.558242475 +0000 UTC m=+0.072274680 container create d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_turing, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Oct 5 05:53:21 localhost systemd[1]: Started libpod-conmon-d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b.scope. Oct 5 05:53:21 localhost systemd[1]: Started libcrun container. Oct 5 05:53:21 localhost podman[304957]: 2025-10-05 09:53:21.618117728 +0000 UTC m=+0.132149923 container init d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_turing, ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=) Oct 5 05:53:21 localhost podman[304957]: 2025-10-05 09:53:21.627160692 +0000 UTC m=+0.141192887 container start d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_turing, 
release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=) Oct 5 05:53:21 localhost podman[304957]: 2025-10-05 09:53:21.627433159 +0000 UTC m=+0.141465354 container attach d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_turing, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, 
description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:53:21 localhost awesome_turing[304972]: 167 167 Oct 5 05:53:21 localhost podman[304957]: 2025-10-05 09:53:21.531037741 +0000 UTC m=+0.045069976 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:21 localhost systemd[1]: libpod-d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b.scope: Deactivated successfully. Oct 5 05:53:21 localhost podman[304957]: 2025-10-05 09:53:21.646892244 +0000 UTC m=+0.160924529 container died d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_turing, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux ) Oct 5 05:53:21 localhost systemd[1]: 
var-lib-containers-storage-overlay-0c9c4907f7d68c680e31f4ee288d078cbbf02e6b22a079b1dceb426c764eb215-merged.mount: Deactivated successfully. Oct 5 05:53:21 localhost systemd[1]: var-lib-containers-storage-overlay-74b75630404f345021b76cae30d714a8e9daeef4f60e35411cfbf6756ef27165-merged.mount: Deactivated successfully. Oct 5 05:53:21 localhost podman[304977]: 2025-10-05 09:53:21.745975935 +0000 UTC m=+0.090705876 container remove d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_turing, io.buildah.version=1.33.12, release=553, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux ) Oct 5 05:53:21 localhost systemd[1]: libpod-conmon-d3b6ccd10a59c5c8b80626705e0bc94ca668964a6e3a012ce1b0479886f5235b.scope: Deactivated successfully. Oct 5 05:53:21 localhost ceph-mon[302793]: Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:53:21 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:53:21 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:21 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:21 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:53:22 localhost podman[305052]: Oct 5 05:53:22 localhost podman[305052]: 2025-10-05 09:53:22.573897273 +0000 UTC m=+0.075831715 container create 0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_boyd, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7) Oct 5 05:53:22 localhost systemd[1]: Started libpod-conmon-0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12.scope. 
Oct 5 05:53:22 localhost systemd[1]: Started libcrun container. Oct 5 05:53:22 localhost podman[305052]: 2025-10-05 09:53:22.636239475 +0000 UTC m=+0.138173917 container init 0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_boyd, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:53:22 localhost podman[305052]: 2025-10-05 09:53:22.543162965 +0000 UTC m=+0.045097427 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:22 localhost podman[305052]: 2025-10-05 09:53:22.645260728 +0000 UTC m=+0.147195160 container start 0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_boyd, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, io.openshift.tags=rhceph ceph, 
GIT_CLEAN=True, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public) Oct 5 05:53:22 localhost stupefied_boyd[305067]: 167 167 Oct 5 05:53:22 localhost systemd[1]: libpod-0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12.scope: Deactivated successfully. Oct 5 05:53:22 localhost podman[305052]: 2025-10-05 09:53:22.645559676 +0000 UTC m=+0.147494198 container attach 0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_boyd, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, distribution-scope=public, GIT_CLEAN=True, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, 
com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.openshift.expose-services=) Oct 5 05:53:22 localhost podman[305052]: 2025-10-05 09:53:22.65279929 +0000 UTC m=+0.154733722 container died 0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_boyd, build-date=2025-09-24T08:57:55, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, vcs-type=git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 05:53:22 localhost systemd[1]: tmp-crun.xdEUzp.mount: Deactivated successfully. Oct 5 05:53:22 localhost systemd[1]: var-lib-containers-storage-overlay-691b2d7c02571537af7f876c7a8568aed1c85ebb277291ba5e32bb26a6357b14-merged.mount: Deactivated successfully. 
Oct 5 05:53:22 localhost podman[305072]: 2025-10-05 09:53:22.742824218 +0000 UTC m=+0.085959349 container remove 0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_boyd, architecture=x86_64, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=) Oct 5 05:53:22 localhost systemd[1]: libpod-conmon-0734d4e5802cae896f4c6d493b36bb93cf6c298de1f9bd95f534ff8b453cdb12.scope: Deactivated successfully. Oct 5 05:53:22 localhost ceph-mon[302793]: Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:53:22 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:53:22 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:22 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:22 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:53:23 localhost podman[305149]: Oct 5 05:53:23 localhost podman[305149]: 2025-10-05 09:53:23.584142057 +0000 UTC m=+0.080820700 container create 8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-type=git, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph) Oct 5 05:53:23 localhost systemd[1]: Started libpod-conmon-8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952.scope. 
Oct 5 05:53:23 localhost systemd[1]: Started libcrun container.
Oct 5 05:53:23 localhost podman[305149]: 2025-10-05 09:53:23.645603584 +0000 UTC m=+0.142282227 container init 8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, RELEASE=main, version=7, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:53:23 localhost podman[305149]: 2025-10-05 09:53:23.552920506 +0000 UTC m=+0.049599219 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:53:23 localhost podman[305149]: 2025-10-05 09:53:23.654652188 +0000 UTC m=+0.151330841 container start 8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, ceph=True, release=553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Oct 5 05:53:23 localhost podman[305149]: 2025-10-05 09:53:23.655065249 +0000 UTC m=+0.151743922 container attach 8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 05:53:23 localhost elastic_grothendieck[305164]: 167 167
Oct 5 05:53:23 localhost systemd[1]: libpod-8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952.scope: Deactivated successfully.
Oct 5 05:53:23 localhost podman[305149]: 2025-10-05 09:53:23.658530702 +0000 UTC m=+0.155209375 container died 8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True)
Oct 5 05:53:23 localhost systemd[1]: var-lib-containers-storage-overlay-d5d49db630169a14d697e54220975167013e510bf3ec634b7a3a7bd3ea4359db-merged.mount: Deactivated successfully.
Oct 5 05:53:23 localhost podman[305169]: 2025-10-05 09:53:23.759731621 +0000 UTC m=+0.085357093 container remove 8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.33.12, release=553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64) Oct 5 05:53:23 localhost systemd[1]: libpod-conmon-8f78d891bf4c8db16bff2cd2312ad07b34dbb8ebf01dfc411cb3a6b27be19952.scope: Deactivated successfully. Oct 5 05:53:23 localhost ceph-mon[302793]: Reconfiguring osd.3 (monmap changed)... Oct 5 05:53:23 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:53:23 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... 
Oct 5 05:53:23 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:23 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:53:23 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:23 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:53:23 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:23 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:53:24 localhost podman[305221]: 2025-10-05 09:53:24.156143787 +0000 UTC m=+0.095648470 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:53:24 localhost podman[305221]: 2025-10-05 09:53:24.19336537 +0000 UTC m=+0.132870063 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 5 05:53:24 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:53:24 localhost podman[305255]: Oct 5 05:53:24 localhost podman[305255]: 2025-10-05 09:53:24.487258732 +0000 UTC m=+0.070867891 container create 44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_black, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph) Oct 5 05:53:24 localhost systemd[1]: Started libpod-conmon-44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e.scope. Oct 5 05:53:24 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:24 localhost podman[305255]: 2025-10-05 09:53:24.5495164 +0000 UTC m=+0.133125589 container init 44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_black, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, version=7, ceph=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux )
Oct 5 05:53:24 localhost podman[305255]: 2025-10-05 09:53:24.559447409 +0000 UTC m=+0.143056598 container start 44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_black, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph)
Oct 5 05:53:24 localhost podman[305255]: 2025-10-05 09:53:24.559744017 +0000 UTC m=+0.143353236 container attach 44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_black, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7)
Oct 5 05:53:24 localhost podman[305255]: 2025-10-05 09:53:24.461388295 +0000 UTC m=+0.044997534 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:53:24 localhost sleepy_black[305270]: 167 167
Oct 5 05:53:24 localhost systemd[1]: libpod-44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e.scope: Deactivated successfully.
Oct 5 05:53:24 localhost podman[305255]: 2025-10-05 09:53:24.56429655 +0000 UTC m=+0.147905789 container died 44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_black, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.)
Oct 5 05:53:24 localhost podman[305275]: 2025-10-05 09:53:24.665828036 +0000 UTC m=+0.090481659 container remove 44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_black, version=7, GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Oct 5 05:53:24 localhost systemd[1]: libpod-conmon-44a8a78854ec3797a085f7baa1ed1bccf27a84d1a08acba7760398d4d7a0627e.scope: Deactivated successfully. Oct 5 05:53:24 localhost systemd[1]: var-lib-containers-storage-overlay-b68fdfc51a0a87cc544cb4764d0a4eb61a02278835e8176cd52ba008fbe3cdec-merged.mount: Deactivated successfully. 
Oct 5 05:53:25 localhost podman[305345]: Oct 5 05:53:25 localhost podman[305345]: 2025-10-05 09:53:25.510251739 +0000 UTC m=+0.080159461 container create 72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_robinson, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main) Oct 5 05:53:25 localhost systemd[1]: Started libpod-conmon-72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219.scope. Oct 5 05:53:25 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:25 localhost podman[305345]: 2025-10-05 09:53:25.477083736 +0000 UTC m=+0.046991548 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:53:25 localhost podman[305345]: 2025-10-05 09:53:25.581152261 +0000 UTC m=+0.151059983 container init 72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_robinson, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, version=7, vcs-type=git, release=553, ceph=True)
Oct 5 05:53:25 localhost podman[305345]: 2025-10-05 09:53:25.589296051 +0000 UTC m=+0.159203783 container start 72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_robinson, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Oct 5 05:53:25 localhost podman[305345]: 2025-10-05 09:53:25.589542008 +0000 UTC m=+0.159449730 container attach 72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_robinson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, version=7, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main)
Oct 5 05:53:25 localhost dazzling_robinson[305360]: 167 167
Oct 5 05:53:25 localhost systemd[1]: libpod-72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219.scope: Deactivated successfully.
Oct 5 05:53:25 localhost podman[305345]: 2025-10-05 09:53:25.593472293 +0000 UTC m=+0.163380025 container died 72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_robinson, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.tags=rhceph ceph, release=553, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main)
Oct 5 05:53:25 localhost podman[305365]: 2025-10-05 09:53:25.687888119 +0000 UTC m=+0.081862278 container remove 72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_robinson, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux )
Oct 5 05:53:25 localhost systemd[1]: var-lib-containers-storage-overlay-3b61dc5f85c8198ef8fdf093c81aba14a05882d12188749ba3179a4d203b2c78-merged.mount: Deactivated successfully.
Oct 5 05:53:25 localhost systemd[1]: libpod-conmon-72855a4f49fe669cc3c26645765323530c30e9a15d7da9d3b9d85245af024219.scope: Deactivated successfully.
Oct 5 05:53:25 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:25 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:25 localhost ceph-mon[302793]: Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:53:25 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:25 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:53:26 localhost podman[248157]: time="2025-10-05T09:53:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:53:26 localhost podman[248157]: @ - - [05/Oct/2025:09:53:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:53:26 localhost podman[248157]: @ - - [05/Oct/2025:09:53:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19301 "" "Go-http-client/1.1" Oct 5 05:53:26 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:26 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:26 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:53:27 localhost podman[305398]: 2025-10-05 09:53:27.532510834 +0000 UTC m=+0.085853134 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:53:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:53:27 localhost podman[305398]: 2025-10-05 09:53:27.551149307 +0000 UTC m=+0.104491617 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:53:27 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 05:53:27 localhost systemd[1]: tmp-crun.DNBNMD.mount: Deactivated successfully. 
Oct 5 05:53:27 localhost podman[305440]: 2025-10-05 09:53:27.649580461 +0000 UTC m=+0.092421953 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2) Oct 5 05:53:27 localhost podman[305440]: 2025-10-05 09:53:27.692550679 +0000 UTC m=+0.135392191 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:53:27 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:53:28 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:28 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:28 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:53:28 localhost ceph-mon[302793]: Removing np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:28 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:53:28 localhost ceph-mon[302793]: Removing np0005471146.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:53:28 localhost ceph-mon[302793]: Removing np0005471146.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:53:28 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:28 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf 
Oct 5 05:53:29 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:29 localhost ceph-mon[302793]: Removing daemon mgr.np0005471146.xqzesq from np0005471146.localdomain -- ports [9283, 8765] Oct 5 05:53:31 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:32 localhost ceph-mon[302793]: from='mgr.17391 
172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:32 localhost ceph-mon[302793]: Added label _no_schedule to host np0005471146.localdomain Oct 5 05:53:32 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:32 localhost ceph-mon[302793]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471146.localdomain Oct 5 05:53:32 localhost ceph-mon[302793]: Removing key for mgr.np0005471146.xqzesq Oct 5 05:53:32 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth rm", "entity": "mgr.np0005471146.xqzesq"} : dispatch Oct 5 05:53:32 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005471146.xqzesq"}]': finished Oct 5 05:53:32 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:32 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:33 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:33 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:33 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:33 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:53:33 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:33 localhost ceph-mon[302793]: Removing daemon crash.np0005471146 from np0005471146.localdomain -- ports [] Oct 5 05:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:53:34 localhost podman[305778]: 2025-10-05 09:53:34.369039059 +0000 UTC m=+0.077512630 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 05:53:34 localhost podman[305778]: 2025-10-05 09:53:34.40320779 +0000 UTC m=+0.111681361 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:53:34 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:53:34 localhost podman[305800]: 2025-10-05 09:53:34.528002944 +0000 UTC m=+0.083797809 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Oct 5 05:53:34 localhost podman[305800]: 2025-10-05 09:53:34.633418106 +0000 UTC m=+0.189212961 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, 
org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:53:34 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain"} : dispatch Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain"}]': finished Oct 5 05:53:34 localhost ceph-mon[302793]: Removed host np0005471146.localdomain Oct 5 05:53:34 localhost ceph-mon[302793]: Removing key for client.crash.np0005471146.localdomain Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth rm", "entity": "client.crash.np0005471146.localdomain"} : dispatch Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005471146.localdomain"}]': finished Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:34 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:35 localhost 
ceph-mon[302793]: Reconfiguring crash.np0005471147 (monmap changed)... Oct 5 05:53:35 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain Oct 5 05:53:35 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:35 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:35 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:36 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:36 localhost ceph-mon[302793]: Reconfiguring mon.np0005471147 (monmap changed)... Oct 5 05:53:36 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain Oct 5 05:53:36 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:36 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:36 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:37 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)... 
Oct 5 05:53:37 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain Oct 5 05:53:37 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:37 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:37 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:37 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:53:38 localhost ceph-mon[302793]: Reconfiguring mon.np0005471148 (monmap changed)... Oct 5 05:53:38 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:53:38 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:38 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:38 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:39 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)... 
Oct 5 05:53:39 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:53:39 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:39 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:39 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:40 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)... Oct 5 05:53:40 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:53:40 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:40 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:40 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:41 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:41 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)... 
Oct 5 05:53:41 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:53:41 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:41 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:41 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:53:42 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:53:42 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:53:42 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:42 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:42 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:53:42 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:43 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:53:43 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:53:43 localhost ceph-mon[302793]: Saving service mon spec with placement label:mon Oct 5 05:53:43 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:43 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:43 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:44 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:53:44 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:53:44 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:44 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:44 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:53:44 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:44 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:53:45 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef1391e0 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0 Oct 5 05:53:45 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:53:45 localhost ceph-mon[302793]: paxos.2).electionLogic(28) init, last seen epoch 28 Oct 5 05:53:45 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:45 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:45 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:53:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:53:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:53:45 localhost podman[305842]: 2025-10-05 09:53:45.899236715 +0000 UTC m=+0.067924832 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Oct 5 05:53:45 localhost podman[305842]: 2025-10-05 09:53:45.911135426 +0000 UTC m=+0.079823553 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute) Oct 5 05:53:45 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:53:45 localhost systemd[1]: tmp-crun.JOOzbq.mount: Deactivated successfully. 
Oct 5 05:53:45 localhost podman[305843]: 2025-10-05 09:53:45.958245856 +0000 UTC m=+0.126913763 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:53:45 localhost podman[305843]: 2025-10-05 09:53:45.970163617 +0000 UTC m=+0.138831474 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:53:45 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:53:46 localhost ceph-mon[302793]: Remove daemons mon.np0005471150 Oct 5 05:53:46 localhost ceph-mon[302793]: Safe to remove mon.np0005471150: new quorum should be ['np0005471148', 'np0005471147', 'np0005471152', 'np0005471151'] (from ['np0005471148', 'np0005471147', 'np0005471152', 'np0005471151']) Oct 5 05:53:46 localhost ceph-mon[302793]: Removing monitor np0005471150 from monmap... Oct 5 05:53:46 localhost ceph-mon[302793]: Removing daemon mon.np0005471150 from np0005471150.localdomain -- ports [] Oct 5 05:53:46 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election Oct 5 05:53:46 localhost ceph-mon[302793]: mon.np0005471147 calling monitor election Oct 5 05:53:46 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election Oct 5 05:53:46 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:53:46 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471147,np0005471152,np0005471151 in quorum (ranks 0,1,2,3) Oct 5 05:53:46 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:53:46 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:46 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:46 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:53:46 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:46 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:53:46 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:46 localhost openstack_network_exporter[250246]: ERROR 09:53:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:53:46 localhost openstack_network_exporter[250246]: ERROR 09:53:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:53:46 localhost openstack_network_exporter[250246]: Oct 5 05:53:46 localhost openstack_network_exporter[250246]: ERROR 09:53:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:53:46 localhost openstack_network_exporter[250246]: ERROR 09:53:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:53:46 localhost openstack_network_exporter[250246]: ERROR 09:53:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:53:46 localhost openstack_network_exporter[250246]: Oct 5 05:53:47 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:47 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:47 localhost ceph-mon[302793]: Reconfiguring osd.2 (monmap changed)... 
Oct 5 05:53:47 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:53:47 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:53:47 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:47 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:47 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:53:48 localhost ceph-mon[302793]: Reconfiguring osd.5 (monmap changed)... Oct 5 05:53:48 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:53:48 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:48 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:48 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:49 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... 
Oct 5 05:53:49 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:53:49 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:49 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:49 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:53:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:53:49 localhost podman[305883]: 2025-10-05 09:53:49.910630832 +0000 UTC m=+0.075404534 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 05:53:49 localhost podman[305883]: 2025-10-05 09:53:49.951231436 +0000 UTC m=+0.116005098 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., release=1755695350, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:53:49 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:53:50 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:53:50 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:53:50 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:50 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:50 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:53:50 localhost podman[305957]: Oct 5 05:53:50 localhost podman[305957]: 2025-10-05 09:53:50.764420437 +0000 UTC m=+0.075801054 container create 7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_haibt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, CEPH_POINT_RELEASE=) Oct 5 05:53:50 localhost systemd[1]: Started 
libpod-conmon-7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0.scope. Oct 5 05:53:50 localhost systemd[1]: Started libcrun container. Oct 5 05:53:50 localhost podman[305957]: 2025-10-05 09:53:50.831484405 +0000 UTC m=+0.142865022 container init 7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_haibt, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=) Oct 5 05:53:50 localhost podman[305957]: 2025-10-05 09:53:50.734130632 +0000 UTC m=+0.045511269 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:50 localhost podman[305957]: 2025-10-05 09:53:50.842251516 +0000 UTC m=+0.153632123 container start 7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_haibt, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, architecture=x86_64, name=rhceph, RELEASE=main) Oct 5 05:53:50 localhost podman[305957]: 2025-10-05 09:53:50.842545324 +0000 UTC m=+0.153925991 container attach 7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_haibt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-type=git, ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_CLEAN=True) Oct 5 05:53:50 localhost interesting_haibt[305972]: 167 167 Oct 5 05:53:50 localhost systemd[1]: libpod-7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0.scope: Deactivated successfully. Oct 5 05:53:50 localhost podman[305957]: 2025-10-05 09:53:50.845816351 +0000 UTC m=+0.157197008 container died 7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_haibt, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55) Oct 5 05:53:50 localhost systemd[1]: tmp-crun.ws4TXI.mount: Deactivated successfully. Oct 5 05:53:50 localhost systemd[1]: var-lib-containers-storage-overlay-916e4373c0f6a1d1925bb031653758a87726aa39676d673dded65d8e75357334-merged.mount: Deactivated successfully. 
Oct 5 05:53:50 localhost podman[305977]: 2025-10-05 09:53:50.950971977 +0000 UTC m=+0.092443283 container remove 7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_haibt, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, ceph=True, version=7, io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux ) Oct 5 05:53:50 localhost systemd[1]: libpod-conmon-7222e460a861a6e8b8f0798c052560ac5d928288ecbd2746e73afea1c09826a0.scope: Deactivated successfully. Oct 5 05:53:51 localhost ceph-mon[302793]: Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:53:51 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:53:51 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:51 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:51 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:53:51 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:53:51 localhost podman[306047]: Oct 5 05:53:51 localhost podman[306047]: 2025-10-05 09:53:51.650171176 +0000 UTC m=+0.076163205 container create b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_ganguly, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
RELEASE=main, name=rhceph) Oct 5 05:53:51 localhost systemd[1]: Started libpod-conmon-b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa.scope. Oct 5 05:53:51 localhost systemd[1]: Started libcrun container. Oct 5 05:53:51 localhost podman[306047]: 2025-10-05 09:53:51.717631674 +0000 UTC m=+0.143623703 container init b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_ganguly, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, release=553, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7) Oct 5 05:53:51 localhost podman[306047]: 2025-10-05 09:53:51.618536392 +0000 UTC m=+0.044528451 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:51 localhost cool_ganguly[306062]: 167 167 Oct 5 05:53:51 localhost systemd[1]: libpod-b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa.scope: Deactivated successfully. 
Oct 5 05:53:51 localhost podman[306047]: 2025-10-05 09:53:51.730792268 +0000 UTC m=+0.156784347 container start b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_ganguly, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, version=7, GIT_CLEAN=True) Oct 5 05:53:51 localhost podman[306047]: 2025-10-05 09:53:51.731430736 +0000 UTC m=+0.157422815 container attach b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_ganguly, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Oct 5 05:53:51 localhost podman[306047]: 2025-10-05 09:53:51.734690314 +0000 UTC m=+0.160682393 container died b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_ganguly, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, version=7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12) Oct 5 05:53:51 localhost podman[306067]: 2025-10-05 09:53:51.821604677 +0000 UTC m=+0.083547774 container remove b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_ganguly, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, architecture=x86_64) Oct 5 05:53:51 localhost systemd[1]: libpod-conmon-b7666854c6b2cc1c6c72e77d20be67ff0d340a611fd81fad9a177edf15a401aa.scope: Deactivated successfully. Oct 5 05:53:51 localhost systemd[1]: var-lib-containers-storage-overlay-b17ad681b8516cadd1a08587002f5ac24f5fef2c92887d9da0e04d6675ab188d-merged.mount: Deactivated successfully. 
Oct 5 05:53:52 localhost podman[306143]: Oct 5 05:53:52 localhost podman[306143]: 2025-10-05 09:53:52.61930353 +0000 UTC m=+0.072455263 container create 8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_jepsen, RELEASE=main, release=553, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, io.buildah.version=1.33.12) Oct 5 05:53:52 localhost systemd[1]: Started libpod-conmon-8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da.scope. Oct 5 05:53:52 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:52 localhost podman[306143]: 2025-10-05 09:53:52.686993926 +0000 UTC m=+0.140145659 container init 8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_jepsen, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:53:52 localhost podman[306143]: 2025-10-05 09:53:52.58999433 +0000 UTC m=+0.043146103 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:52 localhost podman[306143]: 2025-10-05 09:53:52.69646139 +0000 UTC m=+0.149613143 container start 8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_jepsen, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, version=7) Oct 5 05:53:52 localhost compassionate_jepsen[306158]: 167 167 Oct 5 05:53:52 localhost podman[306143]: 2025-10-05 09:53:52.698574527 +0000 UTC m=+0.151726261 container attach 8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_jepsen, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, release=553) Oct 5 05:53:52 localhost systemd[1]: libpod-8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da.scope: Deactivated successfully. Oct 5 05:53:52 localhost podman[306143]: 2025-10-05 09:53:52.701845186 +0000 UTC m=+0.154996979 container died 8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_jepsen, release=553, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Oct 5 05:53:52 localhost ceph-mon[302793]: Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:53:52 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:53:52 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:52 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:52 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:53:52 localhost podman[306163]: 2025-10-05 09:53:52.793347613 +0000 UTC m=+0.080623345 container remove 8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_jepsen, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph) Oct 5 05:53:52 localhost systemd[1]: libpod-conmon-8d0886fa57fe4b92eded2e67f52ca8984c65aa13aa9a39506a16028965e7a9da.scope: Deactivated successfully. 
Oct 5 05:53:52 localhost systemd[1]: var-lib-containers-storage-overlay-0acf504914868f1a442f1ca18ab074ecbe8d3ee4a86907c49a920d5c721ec9f8-merged.mount: Deactivated successfully. Oct 5 05:53:53 localhost podman[306236]: Oct 5 05:53:53 localhost podman[306236]: 2025-10-05 09:53:53.604668453 +0000 UTC m=+0.080324726 container create da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_hermann, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, RELEASE=main, version=7, com.redhat.component=rhceph-container, release=553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public) Oct 5 05:53:53 localhost systemd[1]: Started libpod-conmon-da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5.scope. Oct 5 05:53:53 localhost systemd[1]: Started libcrun container. 
Oct 5 05:53:53 localhost podman[306236]: 2025-10-05 09:53:53.664381743 +0000 UTC m=+0.140038006 container init da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_hermann, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc.) 
Oct 5 05:53:53 localhost podman[306236]: 2025-10-05 09:53:53.57299853 +0000 UTC m=+0.048654863 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:53 localhost podman[306236]: 2025-10-05 09:53:53.673991372 +0000 UTC m=+0.149647645 container start da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_hermann, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, distribution-scope=public, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:53:53 localhost podman[306236]: 2025-10-05 09:53:53.674242009 +0000 UTC m=+0.149898322 container attach da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_hermann, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, 
release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7) Oct 5 05:53:53 localhost mystifying_hermann[306251]: 167 167 Oct 5 05:53:53 localhost systemd[1]: libpod-da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5.scope: Deactivated successfully. Oct 5 05:53:53 localhost podman[306236]: 2025-10-05 09:53:53.677446455 +0000 UTC m=+0.153102798 container died da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_hermann, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7) Oct 5 05:53:53 localhost ceph-mon[302793]: Reconfiguring osd.3 (monmap changed)... Oct 5 05:53:53 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:53:53 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:53 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:53:53 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:53:53 localhost podman[306256]: 2025-10-05 09:53:53.771762608 +0000 UTC m=+0.082624489 container remove da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_hermann, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., version=7, release=553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph) Oct 5 05:53:53 localhost systemd[1]: libpod-conmon-da29680dbaaa47f203b02be714b5107d2a03ec3857aa66fe5c52d5dd2e6820e5.scope: Deactivated successfully. Oct 5 05:53:53 localhost systemd[1]: var-lib-containers-storage-overlay-5dd017e6ff8a3adceb51b1a132d6b040900e204cd0ba378fc09d6e24b12bb59c-merged.mount: Deactivated successfully. Oct 5 05:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:53:54 localhost podman[306323]: 2025-10-05 09:53:54.42396318 +0000 UTC m=+0.079279749 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:53:54 localhost podman[306333]: Oct 5 05:53:54 localhost podman[306333]: 2025-10-05 09:53:54.452371735 +0000 UTC m=+0.081630371 container create d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goldberg, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, release=553, name=rhceph, GIT_CLEAN=True) Oct 5 05:53:54 localhost podman[306323]: 2025-10-05 09:53:54.458315445 
+0000 UTC m=+0.113632014 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 05:53:54 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:53:54 localhost systemd[1]: Started libpod-conmon-d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93.scope. Oct 5 05:53:54 localhost systemd[1]: Started libcrun container. Oct 5 05:53:54 localhost podman[306333]: 2025-10-05 09:53:54.517590203 +0000 UTC m=+0.146848829 container init d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goldberg, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., release=553, ceph=True, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.buildah.version=1.33.12) Oct 5 05:53:54 localhost podman[306333]: 2025-10-05 09:53:54.418550604 +0000 UTC m=+0.047809240 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:53:54 localhost podman[306333]: 2025-10-05 09:53:54.525700572 +0000 UTC m=+0.154959198 container start d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goldberg, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:53:54 localhost podman[306333]: 2025-10-05 09:53:54.526040241 +0000 UTC m=+0.155298907 container attach d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goldberg, distribution-scope=public, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 
on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux ) Oct 5 05:53:54 localhost dreamy_goldberg[306360]: 167 167 Oct 5 05:53:54 localhost systemd[1]: libpod-d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93.scope: Deactivated successfully. Oct 5 05:53:54 localhost podman[306333]: 2025-10-05 09:53:54.529510534 +0000 UTC m=+0.158769220 container died d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goldberg, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553) Oct 5 05:53:54 localhost podman[306365]: 2025-10-05 09:53:54.627402673 +0000 UTC m=+0.083838501 container remove d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_goldberg, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , 
name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12)
Oct 5 05:53:54 localhost systemd[1]: libpod-conmon-d4481596409509b33aee1caf5bbf773eefdbe01671e3515e070df5b06b55be93.scope: Deactivated successfully.
Oct 5 05:53:54 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)...
Oct 5 05:53:54 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain
Oct 5 05:53:54 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:54 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:54 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:53:54 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:53:54 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:53:54 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:54 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:54 localhost systemd[1]: var-lib-containers-storage-overlay-ba24cc6dbc5bac716503ca1286028cb2da815e6d41f0c916c1885a387c9a59b3-merged.mount: Deactivated successfully.
Oct 5 05:53:56 localhost podman[248157]: time="2025-10-05T09:53:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:53:56 localhost podman[248157]: @ - - [05/Oct/2025:09:53:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1"
Oct 5 05:53:56 localhost podman[248157]: @ - - [05/Oct/2025:09:53:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19291 "" "Go-http-client/1.1"
Oct 5 05:53:56 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:53:56 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:56 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:56 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:53:57 localhost
systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:53:57 localhost podman[306666]: 2025-10-05 09:53:57.692676964 +0000 UTC m=+0.074466507 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:53:57 localhost podman[306666]: 2025-10-05 09:53:57.70622916 +0000 UTC m=+0.088018713 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible,
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 05:53:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:53:57 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 05:53:57 localhost podman[306723]: 2025-10-05 09:53:57.793973826 +0000 UTC m=+0.066104773 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:53:57 localhost podman[306723]: 2025-10-05 09:53:57.833364138 +0000 UTC m=+0.105495075 container exec_died
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:53:57 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:53:57 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:53:57 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:53:57 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:53:57 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:53:57 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:57 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:58 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:53:58 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:53:58 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:53:58 localhost ceph-mon[302793]: Updating
np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:53:58 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:53:58 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:58 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:58 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:58 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:53:58 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:58 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:53:59 localhost ceph-mon[302793]: Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:53:59 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:53:59 localhost ceph-mon[302793]: Deploying daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:53:59 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:59 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:53:59 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:00 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:00 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:00 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:00 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:00 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:01 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Oct 5 05:54:01 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Oct 5 05:54:01 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Oct 5 05:54:01 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x55b2ef138f20
mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0
Oct 5 05:54:01 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election
Oct 5 05:54:01 localhost ceph-mon[302793]: paxos.2).electionLogic(30) init, last seen epoch 30
Oct 5 05:54:01 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:01 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:02 localhost nova_compute[297130]: 2025-10-05 09:54:02.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:02 localhost nova_compute[297130]: 2025-10-05 09:54:02.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:54:02 localhost nova_compute[297130]: 2025-10-05 09:54:02.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:54:02 localhost nova_compute[297130]: 2025-10-05 09:54:02.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:54:02 localhost
nova_compute[297130]: 2025-10-05 09:54:02.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:54:02 localhost nova_compute[297130]: 2025-10-05 09:54:02.297 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 05:54:02 localhost nova_compute[297130]: 2025-10-05 09:54:02.297 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 05:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 05:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:54:04 localhost systemd[1]: tmp-crun.CoKCGC.mount: Deactivated successfully.
Oct 5 05:54:04 localhost podman[306774]: 2025-10-05 09:54:04.917311661 +0000 UTC m=+0.081060676 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 5 05:54:04 localhost systemd[1]: tmp-crun.EyTNE6.mount: Deactivated successfully.
Oct 5 05:54:04 localhost podman[306773]: 2025-10-05 09:54:04.993699511 +0000 UTC m=+0.161282509 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 5 05:54:05 localhost podman[306773]: 2025-10-05 09:54:05.004124242 +0000 UTC m=+0.171707210 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:54:05 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:54:05 localhost podman[306774]: 2025-10-05 09:54:05.022150538 +0000 UTC m=+0.185899513 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:54:05 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e9 handle_timecheck drop unexpected msg
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471152@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:06 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:54:06 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471147 calling monitor election
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471151 in quorum (ranks 0,2,3)
Oct 5 05:54:06 localhost ceph-mon[302793]: overall HEALTH_OK
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election
Oct 5 05:54:06 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471147,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3,4)
Oct 5 05:54:06 localhost ceph-mon[302793]: overall HEALTH_OK
Oct 5 05:54:06 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:06 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod'
Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.162 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.865s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.369 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.370 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11971MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id":
"pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.370 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.370 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.459 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.460 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:54:07 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:07 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:54:07 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:07 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:54:07 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:07 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:07 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:54:07 localhost nova_compute[297130]: 2025-10-05 09:54:07.734 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:54:08 localhost nova_compute[297130]: 2025-10-05 09:54:08.183 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:54:08 localhost nova_compute[297130]: 2025-10-05 09:54:08.188 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:54:08 localhost nova_compute[297130]: 2025-10-05 09:54:08.206 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:54:08 localhost nova_compute[297130]: 2025-10-05 09:54:08.207 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:54:08 localhost nova_compute[297130]: 2025-10-05 09:54:08.207 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:54:08 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:54:08 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:08 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:08 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:08 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:54:09 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)... Oct 5 05:54:09 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:09 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:09 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:09 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.206 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.207 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.208 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.208 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.225 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.226 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.227 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.227 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 
09:54:10.228 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.228 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:54:10 localhost nova_compute[297130]: 2025-10-05 09:54:10.229 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:54:10 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:54:10 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:54:10 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:10 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:10 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.379493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051379565, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2855, "num_deletes": 252, "total_data_size": 5216599, "memory_usage": 5290704, "flush_reason": "Manual Compaction"} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051415048, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2920031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11016, "largest_seqno": 13866, "table_properties": {"data_size": 2908327, "index_size": 7249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30750, "raw_average_key_size": 22, "raw_value_size": 2882490, "raw_average_value_size": 2143, "num_data_blocks": 316, "num_entries": 1345, "num_filter_entries": 1345, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657987, "oldest_key_time": 1759657987, "file_creation_time": 1759658051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 35602 microseconds, and 7848 cpu microseconds. Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:54:11 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.415096) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2920031 bytes OK Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.415123) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.422394) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.422418) EVENT_LOG_v1 {"time_micros": 1759658051422411, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.422438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 
1] max score 0.25 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5202830, prev total WAL file size 5217708, number of live WAL files 2. Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.423647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end) Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2851KB)], [15(11MB)] Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051423699, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14621830, "oldest_snapshot_seqno": -1} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9986 keys, 13437334 bytes, temperature: kUnknown Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051533107, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13437334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13377984, "index_size": 33265, "index_partitions": 0, "top_level_index_size": 0, 
"index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25029, "raw_key_size": 266695, "raw_average_key_size": 26, "raw_value_size": 13204865, "raw_average_value_size": 1322, "num_data_blocks": 1277, "num_entries": 9986, "num_filter_entries": 9986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759658051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.533415) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13437334 bytes Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.542200) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.5 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.6) OK, records in: 10530, records dropped: 544 output_compression: NoCompression Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.542231) EVENT_LOG_v1 {"time_micros": 1759658051542218, "job": 6, "event": "compaction_finished", "compaction_time_micros": 109506, "compaction_time_cpu_micros": 44080, "output_level": 6, "num_output_files": 1, "total_output_size": 13437334, "num_input_records": 10530, "num_output_records": 9986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051543082, "job": 6, "event": "table_file_deletion", "file_number": 17} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658051544666, "job": 6, 
"event": "table_file_deletion", "file_number": 15} Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.423539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.544742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.544748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.544928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.544932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:11.544934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:54:11 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:54:11 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:54:11 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:11 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:11 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:12 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:54:12 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:54:12 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:12 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:12 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:54:13 localhost ceph-mon[302793]: Reconfiguring osd.2 (monmap changed)... Oct 5 05:54:13 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:54:13 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:13 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:13 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:54:14 localhost ceph-mon[302793]: Reconfiguring osd.5 (monmap changed)... 
Oct 5 05:54:14 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:14 localhost ceph-mon[302793]: from='mgr.17391 
172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:15 localhost ceph-mon[302793]: Reconfig service osd.default_drive_group Oct 5 05:54:15 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:54:15 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:54:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:15 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e9 handle_command mon_command({"prefix": "mgr fail"} v 0) Oct 5 05:54:16 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/496180965' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 e83: 6 total, 6 up, 6 in Oct 5 05:54:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:54:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr handle_mgr_map Activating! Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr handle_mgr_map I am now activating Oct 5 05:54:16 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:16 localhost systemd-logind[760]: Session 68 logged out. 
Waiting for processes to exit. Oct 5 05:54:16 localhost systemd[1]: session-68.scope: Deactivated successfully. Oct 5 05:54:16 localhost systemd[1]: session-68.scope: Consumed 17.432s CPU time. Oct 5 05:54:16 localhost systemd-logind[760]: Removed session 68. Oct 5 05:54:16 localhost ceph-mgr[301363]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: balancer Oct 5 05:54:16 localhost ceph-mgr[301363]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: [balancer INFO root] Starting Oct 5 05:54:16 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_09:54:16 Oct 5 05:54:16 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 05:54:16 localhost ceph-mgr[301363]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Oct 5 05:54:16 localhost podman[306863]: 2025-10-05 09:54:16.455852439 +0000 UTC m=+0.086583535 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS) Oct 5 05:54:16 localhost ceph-mgr[301363]: [cephadm WARNING root] removing stray HostCache host record np0005471146.localdomain.devices.0 Oct 5 05:54:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005471146.localdomain.devices.0 Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: cephadm Oct 5 05:54:16 localhost ceph-mgr[301363]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: crash Oct 5 05:54:16 localhost podman[306864]: 2025-10-05 09:54:16.512583059 +0000 UTC m=+0.136161152 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:54:16 localhost ceph-mgr[301363]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: devicehealth Oct 5 05:54:16 localhost ceph-mgr[301363]: [devicehealth INFO root] Starting Oct 5 05:54:16 localhost ceph-mgr[301363]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: iostat Oct 5 05:54:16 localhost ceph-mgr[301363]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: nfs Oct 5 05:54:16 localhost ceph-mgr[301363]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: orchestrator Oct 5 05:54:16 localhost ceph-mgr[301363]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: pg_autoscaler Oct 5 05:54:16 localhost podman[306864]: 2025-10-05 09:54:16.525110226 +0000 UTC m=+0.148688369 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:54:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 05:54:16 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:54:16 localhost ceph-mgr[301363]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: progress Oct 5 05:54:16 localhost podman[306863]: 2025-10-05 09:54:16.541439877 +0000 UTC m=+0.172170983 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:54:16 localhost ceph-mgr[301363]: [progress INFO root] Loading... Oct 5 05:54:16 localhost ceph-mgr[301363]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: [progress INFO root] Loaded OSDMap, ready. Oct 5 05:54:16 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] recovery thread starting Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] starting setup Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: rbd_support Oct 5 05:54:16 localhost ceph-mgr[301363]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: restful Oct 5 05:54:16 localhost ceph-mgr[301363]: [restful INFO root] server_addr: :: server_port: 8003 Oct 5 05:54:16 localhost ceph-mgr[301363]: [restful WARNING root] server not running: no certificate configured Oct 5 05:54:16 localhost ceph-mgr[301363]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: status Oct 5 05:54:16 localhost ceph-mgr[301363]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: telemetry Oct 5 05:54:16 localhost ceph-mgr[301363]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 05:54:16 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 05:54:16 localhost ceph-mgr[301363]: mgr load Constructed class from module: volumes Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 
error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.615+0000 7f7004489640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.615+0000 7f7004489640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.615+0000 7f7004489640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.615+0000 7f7004489640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.615+0000 7f7004489640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.617+0000 7f7001483640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.617+0000 
7f7001483640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.617+0000 7f7001483640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.617+0000 7f7001483640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:54:16.617+0000 7f7001483640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] PerfHandler: starting Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: vms, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: volumes, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: images, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: backups, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] TaskHandler: starting Oct 5 05:54:16 localhost ceph-mgr[301363]: 
[rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Oct 5 05:54:16 localhost ceph-mgr[301363]: [rbd_support INFO root] setup complete Oct 5 05:54:16 localhost openstack_network_exporter[250246]: ERROR 09:54:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:54:16 localhost openstack_network_exporter[250246]: ERROR 09:54:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:54:16 localhost openstack_network_exporter[250246]: ERROR 09:54:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:54:16 localhost openstack_network_exporter[250246]: ERROR 09:54:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:54:16 localhost openstack_network_exporter[250246]: Oct 5 05:54:16 localhost openstack_network_exporter[250246]: ERROR 09:54:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:54:16 localhost openstack_network_exporter[250246]: Oct 5 05:54:16 localhost sshd[307044]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:54:16 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:54:16 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17391 172.18.0.107:0/2694972464' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='client.? 172.18.0.200:0/496180965' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: Activating manager daemon np0005471152.kbhlus Oct 5 05:54:16 localhost ceph-mon[302793]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:54:16 localhost ceph-mon[302793]: Manager daemon np0005471152.kbhlus is now available Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"}]': finished Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471146.localdomain.devices.0"}]': finished Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:54:16 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:54:16 localhost systemd-logind[760]: New session 69 of user ceph-admin. Oct 5 05:54:16 localhost systemd[1]: Started Session 69 of User ceph-admin. Oct 5 05:54:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:54:17 localhost ceph-mon[302793]: removing stray HostCache host record np0005471146.localdomain.devices.0 Oct 5 05:54:17 localhost podman[307154]: 2025-10-05 09:54:17.934395607 +0000 UTC m=+0.090252114 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, release=553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 
7, com.redhat.component=rhceph-container, vcs-type=git) Oct 5 05:54:17 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:54:17] ENGINE Bus STARTING Oct 5 05:54:17 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:54:17] ENGINE Bus STARTING Oct 5 05:54:18 localhost podman[307154]: 2025-10-05 09:54:18.06435134 +0000 UTC m=+0.220207887 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, maintainer=Guillaume Abrioux , release=553, io.buildah.version=1.33.12, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:54:18 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:54:18] ENGINE Serving on http://172.18.0.108:8765 Oct 5 05:54:18 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:54:18] ENGINE Serving on http://172.18.0.108:8765 Oct 5 05:54:18 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:54:18] ENGINE 
Serving on https://172.18.0.108:7150 Oct 5 05:54:18 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:54:18] ENGINE Serving on https://172.18.0.108:7150 Oct 5 05:54:18 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:54:18] ENGINE Bus STARTED Oct 5 05:54:18 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:54:18] ENGINE Bus STARTED Oct 5 05:54:18 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:54:18] ENGINE Client ('172.18.0.108', 42012) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:54:18 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:54:18] ENGINE Client ('172.18.0.108', 42012) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:54:18 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:54:18 localhost ceph-mgr[301363]: [devicehealth INFO root] Check health Oct 5 05:54:18 localhost ceph-mon[302793]: [05/Oct/2025:09:54:17] ENGINE Bus STARTING Oct 5 05:54:18 localhost ceph-mon[302793]: [05/Oct/2025:09:54:18] ENGINE Serving on http://172.18.0.108:8765 Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: 
from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:18 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:19 localhost ceph-mon[302793]: [05/Oct/2025:09:54:18] ENGINE Serving on https://172.18.0.108:7150 Oct 5 05:54:19 localhost ceph-mon[302793]: [05/Oct/2025:09:54:18] ENGINE Bus STARTED Oct 5 05:54:19 localhost ceph-mon[302793]: [05/Oct/2025:09:54:18] ENGINE Client ('172.18.0.108', 42012) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:20 localhost ceph-mgr[301363]: 
log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:54:20.392 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:54:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:54:20.393 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:54:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:54:20.393 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value 
'877246668' is below minimum 939524096 Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471147.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:54:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:54:20 localhost podman[307446]: 2025-10-05 09:54:20.639842599 +0000 UTC m=+0.099646008 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7) Oct 5 05:54:20 localhost podman[307446]: 2025-10-05 09:54:20.658141271 +0000 UTC m=+0.117944690 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, 
Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal) Oct 5 05:54:20 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd/host:np0005471147", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:54:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
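The mon audit records above all follow one fixed shape: `from='<addr>' entity='<name>' cmd={<json>} : <result>`. A minimal sketch of extracting the command payload from one such line (the line is copied verbatim from the log; the regex is an assumption based only on the format visible here, not a Ceph-provided parser):

```python
import json
import re

# One audit record, copied verbatim from the ceph-mon log above.
line = ("Oct 5 05:54:20 localhost ceph-mon[302793]: "
        "from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' "
        'cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch')

# The cmd payload is plain JSON sitting between cmd= and the trailing " : <result>".
m = re.search(r"entity='([^']*)' cmd=(\{.*\}) : (\w+)", line)
entity = m.group(1)            # "mgr.np0005471152.kbhlus"
cmd = json.loads(m.group(2))   # {"prefix": "config rm", "who": "osd.0", ...}
result = m.group(3)            # "dispatch"
```

Grouping parsed records by `cmd["who"]` is one way to see that the same `config rm` was issued once per OSD and once per `osd/host:` section.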
Oct 5 05:54:21 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mgr.np0005471151.jecxod 172.18.0.107:0/1912291206; not ready for session (expect reconnect)
Oct 5 05:54:21 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471147.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471147.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:21 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:54:21 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:54:21 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:54:21 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:54:21 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:54:21 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:54:21 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:21 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:21 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:21 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:21 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:21 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:22 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:23 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:54:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 0 B/s wr, 19 op/s
Oct 5 05:54:23 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev a84e79f2-7fd6-4add-b46e-626aa560bce1 (Updating node-proxy deployment (+5 -> 5))
Oct 5 05:54:23 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev a84e79f2-7fd6-4add-b46e-626aa560bce1 (Updating node-proxy deployment (+5 -> 5))
Oct 5 05:54:23 localhost ceph-mgr[301363]: [progress INFO root] Completed event a84e79f2-7fd6-4add-b46e-626aa560bce1 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Oct 5 05:54:23 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:23 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471147 (monmap changed)...
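The "Unable to set osd_memory_target" failures earlier in this stream are not a parsing bug: the autotuned per-OSD value simply falls below the option's hard minimum, so the config set is rejected. A quick check of the arithmetic, using only the two numbers that appear verbatim in the log:

```python
# Both constants are taken verbatim from the log lines above.
MIN_OSD_MEMORY_TARGET = 939_524_096  # rejection threshold: "below minimum 939524096"
proposed = 877_246_668               # value the autotuner tried to set per OSD

proposed_mib = round(proposed / 2**20, 1)     # 836.6 -> matches "Adjusting ... to 836.6M"
accepted = proposed >= MIN_OSD_MEMORY_TARGET  # False -> hence "Unable to set ..."
```

So each affected host would need enough spare memory for the autotuner to propose at least 939524096 bytes (896 MiB) per OSD before the setting would stick.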
Oct 5 05:54:23 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:23 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:24 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:24 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:24 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:24 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:24 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:24 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:24 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:24 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:24 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:24 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:54:24 localhost systemd[1]: tmp-crun.iy2ymb.mount: Deactivated successfully.
Oct 5 05:54:24 localhost podman[308105]: 2025-10-05 09:54:24.916599658 +0000 UTC m=+0.083724677 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 5 05:54:24 localhost podman[308105]: 2025-10-05 09:54:24.92815977 +0000 UTC m=+0.095284729 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:54:24 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:54:25 localhost ceph-mon[302793]: Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:25 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:25 localhost ceph-mon[302793]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Oct 5 05:54:25 localhost ceph-mon[302793]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Oct 5 05:54:25 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:25 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:25 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:25 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 0 B/s wr, 14 op/s
Oct 5 05:54:25 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:54:25 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:54:25 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:54:25 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:54:26 localhost podman[248157]: time="2025-10-05T09:54:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:54:26 localhost podman[248157]: @ - - [05/Oct/2025:09:54:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1"
Oct 5 05:54:26 localhost podman[248157]: @ - - [05/Oct/2025:09:54:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19303 "" "Go-http-client/1.1"
Oct 5 05:54:26 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:26 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:26 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:26 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:26 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:26 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:26 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:26 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 05:54:26 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:54:26 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:54:26 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:54:26 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.792711) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066792760, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1233, "num_deletes": 257, "total_data_size": 5358192, "memory_usage": 5746672, "flush_reason": "Manual Compaction"}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066827468, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 3307863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13871, "largest_seqno": 15099, "table_properties": {"data_size": 3302081, "index_size": 2993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14128, "raw_average_key_size": 20, "raw_value_size": 3289681, "raw_average_value_size": 4880, "num_data_blocks": 125, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658051, "oldest_key_time": 1759658051, "file_creation_time": 1759658066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 34828 microseconds, and 7437 cpu microseconds.
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.827533) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 3307863 bytes OK
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.827562) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.829476) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.829504) EVENT_LOG_v1 {"time_micros": 1759658066829497, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.829529) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5351668, prev total WAL file size 5351668, number of live WAL files 2.
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.831122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303038' seq:72057594037927935, type:22 .. '6B760031323633' seq:0, type:0; will stop at (end)
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(3230KB)], [18(12MB)]
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066831172, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 16745197, "oldest_snapshot_seqno": -1}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10127 keys, 15700877 bytes, temperature: kUnknown
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066978385, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 15700877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15640702, "index_size": 33770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 271848, "raw_average_key_size": 26, "raw_value_size": 15465118, "raw_average_value_size": 1527, "num_data_blocks": 1283, "num_entries": 10127, "num_filter_entries": 10127, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759658066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.978759) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 15700877 bytes
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.980396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.7 rd, 106.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 12.8 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(9.8) write-amplify(4.7) OK, records in: 10660, records dropped: 533 output_compression: NoCompression
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.980428) EVENT_LOG_v1 {"time_micros": 1759658066980415, "job": 8, "event": "compaction_finished", "compaction_time_micros": 147336, "compaction_time_cpu_micros": 41254, "output_level": 6, "num_output_files": 1, "total_output_size": 15700877, "num_input_records": 10660, "num_output_records": 10127, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066981068, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658066982760, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.831006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:54:26 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:54:26.982876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:54:27 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
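The amplification figures rocksdb prints for JOB 8 can be reproduced from the byte counts in the surrounding events: the L0 input is table #20, the total input comes from the `compaction_started` event, and the output is table #21. A quick check of that arithmetic (all three sizes are taken verbatim from the log; the amplification formulas shown are the standard per-compaction definitions rocksdb uses, stated here as an assumption):

```python
# Sizes taken verbatim from the JOB 8 compaction events above.
l0_input = 3_307_863       # table #20 (the L0 flush output being compacted)
total_input = 16_745_197   # "input_data_size" in the compaction_started event
total_output = 15_700_877  # table #21, the new L6 file

write_amplify = round(total_output / l0_input, 1)                       # 4.7
read_write_amplify = round((total_input + total_output) / l0_input, 1)  # 9.8
```

Both results match the `read-write-amplify(9.8) write-amplify(4.7)` figures in the "compacted to:" summary line.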
Oct 5 05:54:27 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:54:27 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:27 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:27 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:27 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:27 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s Oct 5 05:54:27 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:27 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:54:27 localhost podman[308125]: 2025-10-05 09:54:27.920750492 +0000 UTC m=+0.088840436 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 05:54:27 localhost podman[308125]: 2025-10-05 09:54:27.97147059 +0000 UTC m=+0.139560534 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:54:27 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:54:28 localhost podman[308148]: 2025-10-05 09:54:28.061850616 +0000 UTC m=+0.081365855 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3) Oct 5 05:54:28 localhost podman[308148]: 2025-10-05 09:54:28.077260722 +0000 UTC m=+0.096775991 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 5 05:54:28 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:54:28 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)... 
Oct 5 05:54:28 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:54:28 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:28 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:28 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:54:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.27341 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:54:28 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:28 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:29 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:54:29 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:29 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:54:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 5 05:54:29 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:54:29 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : 
Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:54:30 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:54:30 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:30 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:54:30 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:54:30 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:54:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34302 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:54:30 localhost ceph-mgr[301363]: [cephadm INFO root] Saving service mon spec with placement label:mon Oct 5 05:54:30 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Oct 5 05:54:31 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:54:31 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth 
get", "entity": "osd.5"} : dispatch Oct 5 05:54:31 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:54:31 localhost ceph-mon[302793]: Saving service mon spec with placement label:mon Oct 5 05:54:31 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 5 05:54:31 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:31 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:54:31 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:54:31 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:54:31 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:54:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.27356 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005471150", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:54:32 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:54:32 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:54:32 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:54:32 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[302793]: Reconfiguring mon.np0005471151 (monmap changed)... Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:54:32 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:32 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:54:33 localhost podman[308220]: Oct 5 05:54:33 localhost podman[308220]: 2025-10-05 09:54:33.318544463 +0000 UTC m=+0.079007030 container create 259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_pasteur, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, GIT_CLEAN=True) Oct 5 05:54:33 localhost systemd[1]: Started libpod-conmon-259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246.scope. Oct 5 05:54:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 5 05:54:33 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:33 localhost podman[308220]: 2025-10-05 09:54:33.284428794 +0000 UTC m=+0.044891411 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:33 localhost podman[308220]: 2025-10-05 09:54:33.395927288 +0000 UTC m=+0.156389855 container init 259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_pasteur, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True) Oct 5 05:54:33 localhost podman[308220]: 2025-10-05 09:54:33.405946299 +0000 UTC m=+0.166408866 container start 259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_pasteur, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, version=7, io.openshift.expose-services=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.component=rhceph-container) Oct 5 05:54:33 localhost podman[308220]: 2025-10-05 09:54:33.406310589 +0000 UTC m=+0.166773206 container attach 259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_pasteur, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, release=553, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, 
Inc.) Oct 5 05:54:33 localhost systemd[1]: libpod-259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246.scope: Deactivated successfully. Oct 5 05:54:33 localhost clever_pasteur[308235]: 167 167 Oct 5 05:54:33 localhost podman[308220]: 2025-10-05 09:54:33.411591681 +0000 UTC m=+0.172054318 container died 259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_pasteur, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True) Oct 5 05:54:33 localhost podman[308240]: 2025-10-05 09:54:33.516923781 +0000 UTC m=+0.096431081 container remove 259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_pasteur, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, 
io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container) Oct 5 05:54:33 localhost systemd[1]: libpod-conmon-259751bf8f66497355a6640423c1cb1b1eaab9fad4c71b991c1a02b187854246.scope: Deactivated successfully. Oct 5 05:54:33 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 5 05:54:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Oct 5 05:54:33 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:54:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:54:33 localhost ceph-mon[302793]: Reconfiguring crash.np0005471152 (monmap changed)... 
Oct 5 05:54:33 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:54:33 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:33 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:33 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:54:33 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Oct 5 05:54:33 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1804835861' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Oct 5 05:54:34 localhost podman[308310]: Oct 5 05:54:34 localhost podman[308310]: 2025-10-05 09:54:34.257055043 +0000 UTC m=+0.077988504 container create 1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_sammet, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Oct 5 05:54:34 localhost systemd[1]: Started libpod-conmon-1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6.scope. Oct 5 05:54:34 localhost systemd[1]: Started libcrun container. Oct 5 05:54:34 localhost podman[308310]: 2025-10-05 09:54:34.318978412 +0000 UTC m=+0.139911883 container init 1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_sammet, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, architecture=x86_64, release=553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph) Oct 5 05:54:34 localhost podman[308310]: 2025-10-05 09:54:34.226441788 +0000 UTC m=+0.047375329 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:34 localhost podman[308310]: 2025-10-05 09:54:34.327662906 +0000 UTC m=+0.148596367 container start 1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=goofy_sammet, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Oct 5 05:54:34 localhost systemd[1]: var-lib-containers-storage-overlay-438a6cf05379c399a685ad2446ffd52116414f50b7756652da5a11d08b2aef9f-merged.mount: Deactivated successfully. 
Oct 5 05:54:34 localhost podman[308310]: 2025-10-05 09:54:34.32816636 +0000 UTC m=+0.149099871 container attach 1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_sammet, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, release=553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, vcs-type=git, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:34 localhost goofy_sammet[308325]: 167 167 Oct 5 05:54:34 localhost systemd[1]: libpod-1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6.scope: Deactivated successfully. 
Oct 5 05:54:34 localhost podman[308310]: 2025-10-05 09:54:34.332936628 +0000 UTC m=+0.153870129 container died 1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_sammet, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, version=7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:54:34 localhost systemd[1]: var-lib-containers-storage-overlay-01a0eaecf5b6601ac8f2ae14c8846f8ae882709451eddc85b517fd08860690ef-merged.mount: Deactivated successfully. 
Oct 5 05:54:34 localhost podman[308331]: 2025-10-05 09:54:34.432677747 +0000 UTC m=+0.087181872 container remove 1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_sammet, GIT_CLEAN=True, RELEASE=main, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhceph ceph) Oct 5 05:54:34 localhost systemd[1]: libpod-conmon-1943bfead3a5d3769073cd017a8d08057de7e66b2e1a52b7583e8ecaebb28fb6.scope: Deactivated successfully. Oct 5 05:54:34 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Oct 5 05:54:34 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 5 05:54:34 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:54:34 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:54:34 localhost ceph-mon[302793]: Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:54:34 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:54:34 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:34 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:54:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:54:35 localhost podman[308405]: 2025-10-05 09:54:35.381572656 +0000 UTC m=+0.104964510 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:54:35 localhost podman[308421]: Oct 5 05:54:35 localhost podman[308421]: 2025-10-05 09:54:35.395621695 +0000 UTC m=+0.085095824 container create 5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_cray, release=553, io.buildah.version=1.33.12, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Oct 5 05:54:35 localhost systemd[1]: Started libpod-conmon-5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe.scope. Oct 5 05:54:35 localhost systemd[1]: Started libcrun container. Oct 5 05:54:35 localhost podman[308421]: 2025-10-05 09:54:35.362577144 +0000 UTC m=+0.052051323 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:35 localhost podman[308405]: 2025-10-05 09:54:35.46814752 +0000 UTC m=+0.191539364 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:54:35 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 05:54:35 localhost podman[308406]: 2025-10-05 09:54:35.46774777 +0000 UTC m=+0.190010024 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:54:35 localhost podman[308421]: 2025-10-05 09:54:35.525214088 +0000 UTC m=+0.214688227 
container init 5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_cray, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:54:35 localhost podman[308421]: 2025-10-05 09:54:35.536551844 +0000 UTC m=+0.226025973 container start 5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_cray, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Oct 5 05:54:35 localhost podman[308421]: 2025-10-05 09:54:35.536889444 +0000 UTC m=+0.226363623 container attach 5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_cray, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=553, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55) Oct 5 05:54:35 localhost peaceful_cray[308453]: 167 167 Oct 5 05:54:35 localhost systemd[1]: libpod-5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe.scope: Deactivated successfully. 
Oct 5 05:54:35 localhost podman[308421]: 2025-10-05 09:54:35.541360104 +0000 UTC m=+0.230834273 container died 5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_cray, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553) Oct 5 05:54:35 localhost podman[308406]: 2025-10-05 09:54:35.548316812 +0000 UTC m=+0.270579036 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 05:54:35 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:54:35 localhost podman[308470]: 2025-10-05 09:54:35.636202681 +0000 UTC m=+0.084624222 container remove 5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_cray, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., version=7, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, 
io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container) Oct 5 05:54:35 localhost systemd[1]: libpod-conmon-5b1d8bdc1c8d9a075f319b43f0725aa8851bc04335812e77f31be79a2703ddbe.scope: Deactivated successfully. Oct 5 05:54:35 localhost ceph-mon[302793]: Reconfiguring osd.3 (monmap changed)... Oct 5 05:54:35 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:54:35 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:54:35 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:54:35 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:54:35 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:54:36 localhost systemd[1]: var-lib-containers-storage-overlay-78dd8e04a5d35b7704922f7fc2de1a30b98eaa41e1b049f3caa92af4b0fc5a32-merged.mount: Deactivated successfully. 
Oct 5 05:54:36 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:54:36 localhost podman[308547]: Oct 5 05:54:36 localhost podman[308547]: 2025-10-05 09:54:36.539645816 +0000 UTC m=+0.077582443 container create 84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_bartik, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, version=7, io.openshift.expose-services=, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Oct 5 05:54:36 localhost systemd[1]: Started libpod-conmon-84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8.scope. Oct 5 05:54:36 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:36 localhost podman[308547]: 2025-10-05 09:54:36.60512105 +0000 UTC m=+0.143057677 container init 84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_bartik, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, version=7, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55) Oct 5 05:54:36 localhost podman[308547]: 2025-10-05 09:54:36.508584167 +0000 UTC m=+0.046520824 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:36 localhost podman[308547]: 2025-10-05 09:54:36.619933129 +0000 UTC m=+0.157869756 container start 84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_bartik, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, 
GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.33.12) Oct 5 05:54:36 localhost podman[308547]: 2025-10-05 09:54:36.620380061 +0000 UTC m=+0.158316748 container attach 84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_bartik, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:36 
localhost clever_bartik[308562]: 167 167 Oct 5 05:54:36 localhost systemd[1]: libpod-84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8.scope: Deactivated successfully. Oct 5 05:54:36 localhost podman[308547]: 2025-10-05 09:54:36.62591596 +0000 UTC m=+0.163852597 container died 84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_bartik, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12) Oct 5 05:54:36 localhost podman[308567]: 2025-10-05 09:54:36.728672921 +0000 UTC m=+0.089694129 container remove 84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_bartik, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=553) Oct 5 05:54:36 localhost systemd[1]: libpod-conmon-84c85e56dd374000605d4e60920f79bb5e26e331dff2d02de7945796083353d8.scope: Deactivated successfully. Oct 5 05:54:36 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:54:36 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... 
Oct 5 05:54:36 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:36 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:54:36 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:54:36 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:54:36 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:54:36 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:54:36 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:54:37 localhost systemd[1]: tmp-crun.w0rag7.mount: Deactivated successfully. Oct 5 05:54:37 localhost systemd[1]: var-lib-containers-storage-overlay-b9658a8a9e4c79f70cf29db00a991807bc2597b3f555e4f7227e8919b104dbe6-merged.mount: Deactivated successfully. 
Oct 5 05:54:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:54:37 localhost podman[308637]: Oct 5 05:54:37 localhost podman[308637]: 2025-10-05 09:54:37.548965054 +0000 UTC m=+0.080341037 container create d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_colden, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph) Oct 5 05:54:37 localhost systemd[1]: Started libpod-conmon-d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8.scope. Oct 5 05:54:37 localhost systemd[1]: Started libcrun container. 
Oct 5 05:54:37 localhost podman[308637]: 2025-10-05 09:54:37.617255334 +0000 UTC m=+0.148631317 container init d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_colden, release=553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:54:37 localhost podman[308637]: 2025-10-05 09:54:37.518518603 +0000 UTC m=+0.049894626 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:54:37 localhost heuristic_colden[308652]: 167 167 Oct 5 05:54:37 localhost systemd[1]: libpod-d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8.scope: Deactivated successfully. 
Oct 5 05:54:37 localhost podman[308637]: 2025-10-05 09:54:37.630950734 +0000 UTC m=+0.162326757 container start d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_colden, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vendor=Red Hat, Inc., version=7, architecture=x86_64, name=rhceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main)
Oct 5 05:54:37 localhost podman[308637]: 2025-10-05 09:54:37.631846547 +0000 UTC m=+0.163222530 container attach d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_colden, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux , ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc.)
Oct 5 05:54:37 localhost podman[308637]: 2025-10-05 09:54:37.635418624 +0000 UTC m=+0.166794637 container died d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_colden, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, vcs-type=git, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 5 05:54:37 localhost podman[308657]: 2025-10-05 09:54:37.729572562 +0000 UTC m=+0.087530640 container remove d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_colden, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.33.12, name=rhceph, distribution-scope=public, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph)
Oct 5 05:54:37 localhost systemd[1]: libpod-conmon-d12abf04cd51061cfff94903ce1e757b3bebbc8aa6b6a2c6b476da64d705eec8.scope: Deactivated successfully.
Oct 5 05:54:37 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:54:37 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:54:37 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:54:37 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:37 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:37 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:37 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:38 localhost systemd[1]: tmp-crun.xoPLUe.mount: Deactivated successfully.
Oct 5 05:54:38 localhost systemd[1]: var-lib-containers-storage-overlay-f4d5026159aa97201854493f9c762100b4af954ba08ada35d82bbfe36fc89779-merged.mount: Deactivated successfully.
Oct 5 05:54:38 localhost podman[308727]:
Oct 5 05:54:38 localhost podman[308727]: 2025-10-05 09:54:38.457888465 +0000 UTC m=+0.078577909 container create d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_kepler, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=553)
Oct 5 05:54:38 localhost systemd[1]: Started libpod-conmon-d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64.scope.
Oct 5 05:54:38 localhost systemd[1]: Started libcrun container.
Oct 5 05:54:38 localhost podman[308727]: 2025-10-05 09:54:38.521968653 +0000 UTC m=+0.142658087 container init d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_kepler, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.expose-services=, release=553, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc.) 
Oct 5 05:54:38 localhost podman[308727]: 2025-10-05 09:54:38.42689069 +0000 UTC m=+0.047580164 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:54:38 localhost podman[308727]: 2025-10-05 09:54:38.530360659 +0000 UTC m=+0.151050093 container start d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_kepler, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=553, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=)
Oct 5 05:54:38 localhost podman[308727]: 2025-10-05 09:54:38.530574585 +0000 UTC m=+0.151264019 container attach d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_kepler, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 05:54:38 localhost jovial_kepler[308742]: 167 167
Oct 5 05:54:38 localhost systemd[1]: libpod-d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64.scope: Deactivated successfully.
Oct 5 05:54:38 localhost podman[308727]: 2025-10-05 09:54:38.538013446 +0000 UTC m=+0.158702880 container died d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_kepler, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, ceph=True, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12)
Oct 5 05:54:38 localhost podman[308747]: 2025-10-05 09:54:38.633854969 +0000 UTC m=+0.085749103 container remove d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_kepler, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 5 05:54:38 localhost systemd[1]: libpod-conmon-d908b3f8bd3d52406ce93fe2b880b3468ca0a7bba41f934eecec6afac9b90e64.scope: Deactivated successfully.
Oct 5 05:54:38 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev a7005f39-73a7-4467-bed6-0ba9a4de9f28 (Updating node-proxy deployment (+5 -> 5))
Oct 5 05:54:38 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev a7005f39-73a7-4467-bed6-0ba9a4de9f28 (Updating node-proxy deployment (+5 -> 5))
Oct 5 05:54:38 localhost ceph-mgr[301363]: [progress INFO root] Completed event a7005f39-73a7-4467-bed6-0ba9a4de9f28 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:54:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:54:38 localhost ceph-mon[302793]: Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:54:38 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:54:38 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:38 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:38 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:54:38 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:38 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:39 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471147 (monmap changed)...
Oct 5 05:54:39 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471147 (monmap changed)...
Oct 5 05:54:39 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain
Oct 5 05:54:39 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain
Oct 5 05:54:39 localhost systemd[1]: var-lib-containers-storage-overlay-18f3686ef70c02728ceff7d551469c066edff3d4b8a786d39a86ef46380445b4-merged.mount: Deactivated successfully.
Oct 5 05:54:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:39 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:40 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:40 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:40 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:40 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:40 localhost ceph-mon[302793]: Reconfiguring mon.np0005471147 (monmap changed)...
Oct 5 05:54:40 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471147 on np0005471147.localdomain
Oct 5 05:54:40 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:40 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:40 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:41 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:41 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:41 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:41 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34322 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005471147", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 5 05:54:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:41 localhost ceph-mon[302793]: mon.np0005471152@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:41 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 05:54:42 localhost ceph-mon[302793]: Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:42 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:42 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:42 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:42 localhost ceph-mon[302793]: Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:42 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:42 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:42 localhost ceph-mon[302793]: from='mgr.17403 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.27381 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005471147"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:54:42 localhost ceph-mgr[301363]: [cephadm INFO root] Remove daemons mon.np0005471147
Oct 5 05:54:42 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005471147
Oct 5 05:54:42 localhost ceph-mgr[301363]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005471147: new quorum should be ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'])
Oct 5 05:54:42 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005471147: new quorum should be ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'])
Oct 5 05:54:42 localhost ceph-mgr[301363]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005471147 from monmap...
Oct 5 05:54:42 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removing monitor np0005471147 from monmap...
Oct 5 05:54:42 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005471147 from np0005471147.localdomain -- ports []
Oct 5 05:54:42 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005471147 from np0005471147.localdomain -- ports []
Oct 5 05:54:42 localhost ceph-mon[302793]: mon.np0005471152@2(peon) e10 my rank is now 1 (was 2)
Oct 5 05:54:42 localhost ceph-mgr[301363]: client.34287 ms_handle_reset on v2:172.18.0.108:3300/0
Oct 5 05:54:42 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election
Oct 5 05:54:42 localhost ceph-mon[302793]: paxos.1).electionLogic(36) init, last seen epoch 36
Oct 5 05:54:42 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:54:44 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:44 localhost ceph-mon[302793]: Remove daemons mon.np0005471147
Oct 5 05:54:44 localhost ceph-mon[302793]: Safe to remove mon.np0005471147: new quorum should be ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471151', 'np0005471150'])
Oct 5 05:54:44 localhost ceph-mon[302793]: Removing monitor np0005471147 from monmap...
Oct 5 05:54:44 localhost ceph-mon[302793]: Removing daemon mon.np0005471147 from np0005471147.localdomain -- ports []
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election
Oct 5 05:54:44 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471151,np0005471150 in quorum (ranks 0,1,2,3)
Oct 5 05:54:44 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:54:44 localhost ceph-mon[302793]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[302793]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[302793]: stray daemon mgr.np0005471146.xqzesq on host np0005471146.localdomain not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[302793]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:54:44 localhost ceph-mon[302793]: stray host np0005471146.localdomain has 1 stray daemons: ['mgr.np0005471146.xqzesq']
Oct 5 05:54:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:45 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:45 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:54:46 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34329 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005471147.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:54:46 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 8886d5df-44fe-4472-9324-550d04b86a40 (Updating node-proxy deployment (+5 -> 5))
Oct 5 05:54:46 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 8886d5df-44fe-4472-9324-550d04b86a40 (Updating node-proxy deployment (+5 -> 5))
Oct 5 05:54:46 localhost ceph-mgr[301363]: [progress INFO root] Completed event 8886d5df-44fe-4472-9324-550d04b86a40 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Oct 5 05:54:46 localhost ceph-mgr[301363]: [cephadm INFO root] Removed label mon from host np0005471147.localdomain
Oct 5 05:54:46 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removed label mon from host np0005471147.localdomain
Oct 5 05:54:46 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:54:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:54:46 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 05:54:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:54:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:54:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 05:54:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 05:54:46 localhost openstack_network_exporter[250246]: ERROR 09:54:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:54:46 localhost openstack_network_exporter[250246]: ERROR 09:54:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:54:46 localhost openstack_network_exporter[250246]: ERROR 09:54:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:54:46 localhost openstack_network_exporter[250246]: ERROR 09:54:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:54:46 localhost openstack_network_exporter[250246]:
Oct 5 05:54:46 localhost openstack_network_exporter[250246]: ERROR 09:54:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:54:46 localhost openstack_network_exporter[250246]:
Oct 5 05:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:54:46 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:46 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:46 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:46 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:46 localhost podman[309119]: 2025-10-05 09:54:46.884961438 +0000 UTC m=+0.095035142 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 05:54:46 localhost podman[309121]: 2025-10-05 09:54:46.932808078 +0000 UTC m=+0.137614241 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 05:54:46 localhost podman[309121]: 2025-10-05 09:54:46.945109369 +0000 UTC m=+0.149915532 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:54:46 localhost podman[309119]: 2025-10-05 09:54:46.952152879 +0000 UTC m=+0.162226563 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct 5 05:54:46 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 05:54:46 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 05:54:47 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:47 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:47 localhost ceph-mon[302793]: Updating np0005471147.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:47 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:47 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471147.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34337 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005471147.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:54:47 localhost ceph-mgr[301363]: [cephadm INFO root] Removed label mgr from host np0005471147.localdomain
Oct 5 05:54:47 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005471147.localdomain
Oct 5 05:54:47 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:47 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:47 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:47 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:48 localhost ceph-mon[302793]: Removed label mon from host np0005471147.localdomain
Oct 5 05:54:48 localhost ceph-mon[302793]: Reconfiguring crash.np0005471147 (monmap changed)...
Oct 5 05:54:48 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471147 on np0005471147.localdomain
Oct 5 05:54:48 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:48 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:48 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:48 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471147.mwpyfl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:48 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:48 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:48 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:48 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:49 localhost ceph-mon[302793]: Removed label mgr from host np0005471147.localdomain
Oct 5 05:54:49 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471147.mwpyfl (monmap changed)...
Oct 5 05:54:49 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471147.mwpyfl on np0005471147.localdomain
Oct 5 05:54:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:49 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34341 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005471147.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:54:49 localhost ceph-mgr[301363]: [cephadm INFO root] Removed label _admin from host np0005471147.localdomain
Oct 5 05:54:49 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005471147.localdomain
Oct 5 05:54:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:49 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:54:49 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:54:49 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:54:49 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:54:50 localhost ceph-mon[302793]: Reconfiguring mon.np0005471148 (monmap changed)...
Oct 5 05:54:50 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:54:50 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:50 localhost ceph-mon[302793]: Removed label _admin from host np0005471147.localdomain
Oct 5 05:54:50 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:50 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:50 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:50 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:54:50 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:54:50 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:54:50 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 05:54:50 localhost systemd[1]: tmp-crun.vkX9i9.mount: Deactivated successfully.
Oct 5 05:54:50 localhost podman[309160]: 2025-10-05 09:54:50.915432868 +0000 UTC m=+0.083739259 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, release=1755695350)
Oct 5 05:54:50 localhost podman[309160]: 2025-10-05 09:54:50.933195467 +0000 UTC m=+0.101501778 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 5 05:54:50 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 05:54:51 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:54:51 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:54:51 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:51 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:51 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:51 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:54:51 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:54:51 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:54:51 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:54:52 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:54:52 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:54:52 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:52 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:52 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:54:52 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:52 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:54:52 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 5 05:54:52 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 5 05:54:52 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:54:52 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:54:53 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:53 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:53 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)...
Oct 5 05:54:53 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:54:53 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:54:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:53 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Oct 5 05:54:53 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Oct 5 05:54:53 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:54:53 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:54:54 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:54 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:54 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)...
Oct 5 05:54:54 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:54:54 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:54:54 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:54:54 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:54:54 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:54:54 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:54:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:54:55 localhost systemd[1]: tmp-crun.5OWhMc.mount: Deactivated successfully.
Oct 5 05:54:55 localhost podman[309181]: 2025-10-05 09:54:55.140341081 +0000 UTC m=+0.083018868 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 5 05:54:55 localhost podman[309181]: 2025-10-05 09:54:55.148121861 +0000 UTC m=+0.090799628 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 05:54:55 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:54:55 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:54:55 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:54:55 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:54:55 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:54:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:55 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:55 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:55 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:54:55 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:54:55 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:54:55 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:55 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:55 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:54:56 localhost podman[248157]: time="2025-10-05T09:54:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:54:56 localhost podman[248157]: @ - - [05/Oct/2025:09:54:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1"
Oct 5 05:54:56 localhost podman[248157]: @ - - [05/Oct/2025:09:54:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19311 "" "Go-http-client/1.1"
Oct 5 05:54:56 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:56 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:56 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:56 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:56 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:54:56 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:54:56 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:54:56 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:56 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:56 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:54:57 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:54:57 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:54:57 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:54:57 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:54:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:57 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Oct 5 05:54:57 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Oct 5 05:54:57 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:54:57 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:54:58 localhost ceph-mon[302793]: Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:54:58 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:54:58 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:54:58 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:54:58 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:54:58 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:58 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:54:58 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Oct 5 05:54:58 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Oct 5 05:54:58 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:54:58 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:54:58 localhost podman[309198]: 2025-10-05 09:54:58.91924729 +0000 UTC m=+0.089760081 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 5 05:54:58 localhost podman[309198]: 2025-10-05 09:54:58.935149779 +0000 UTC m=+0.105662540 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Oct 5 05:54:58 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:54:59 localhost podman[309199]: 2025-10-05 09:54:59.022535785 +0000 UTC m=+0.189409657 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:54:59 localhost podman[309199]: 2025-10-05 09:54:59.064049864 +0000 UTC m=+0.230923666 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 05:54:59 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 05:54:59 localhost ceph-mon[302793]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:54:59 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:54:59 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:59 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:54:59 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:54:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:54:59 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:54:59 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:54:59 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:54:59 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-mon[302793]: Reconfiguring osd.5 (monmap changed)...
Oct 5 05:55:00 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:00 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:00 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:55:00 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:55:00 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:55:00 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:55:00 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:55:00 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34349 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005471147.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Oct 5 05:55:00 localhost ceph-mgr[301363]: [cephadm INFO root] Added label _no_schedule to host np0005471147.localdomain
Oct 5 05:55:00 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005471147.localdomain
Oct 5 05:55:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 5 05:55:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5073 writes, 22K keys, 5073 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5073 writes, 741 syncs, 6.85 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 161 writes, 474 keys, 161 commit groups, 1.0 writes per commit group, ingest: 0.72 MB, 0.00 MB/s#012Interval WAL: 161 writes, 68 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 5 05:55:00 localhost ceph-mgr[301363]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471147.localdomain
Oct 5 05:55:00 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471147.localdomain
Oct 5 05:55:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:55:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:01 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471151 (monmap changed)...
Oct 5 05:55:01 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471151 (monmap changed)...
Oct 5 05:55:01 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain
Oct 5 05:55:01 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
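[Analyst note: the cephadm serve-loop entries above all share the fixed shape "Reconfiguring daemon <name> on <host>", so the reconfiguration sweep can be summarized mechanically. A minimal stdlib-only sketch of that idea, operating on a hypothetical excerpt of lines like the ones in this log (not a supported Ceph API):]

```python
import re
from collections import defaultdict

# Matches the cephadm lines seen in this log, e.g.
# "... ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain"
RECONF = re.compile(r"Reconfiguring daemon (\S+) on (\S+)")

def daemons_by_host(lines):
    """Group the daemon names cephadm reconfigured by the host it touched them on."""
    hosts = defaultdict(set)
    for line in lines:
        m = RECONF.search(line)
        if m:
            hosts[m.group(2)].add(m.group(1))
    return {host: sorted(daemons) for host, daemons in hosts.items()}

# Hypothetical excerpt, copied from the entry format above.
sample = [
    "Oct 5 05:54:53 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain",
    "Oct 5 05:54:54 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain",
    "Oct 5 05:54:58 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain",
]
print(daemons_by_host(sample))
```

[End note.]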
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:55:01 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[302793]: Added label _no_schedule to host np0005471147.localdomain
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[302793]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471147.localdomain
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:01 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:55:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.44195 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005471147.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.499054) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102499118, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1828, "num_deletes": 252, "total_data_size": 4023041, "memory_usage": 4067896, "flush_reason": "Manual Compaction"}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102514652, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2331889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15104, "largest_seqno": 16927, "table_properties": {"data_size": 2323954, "index_size": 4568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 20583, "raw_average_key_size": 22, "raw_value_size": 2306611, "raw_average_value_size": 2518, "num_data_blocks": 198, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658067, "oldest_key_time": 1759658067, "file_creation_time": 1759658102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 15648 microseconds, and 6350 cpu microseconds.
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.514701) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2331889 bytes OK
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.514726) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.517170) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.517194) EVENT_LOG_v1 {"time_micros": 1759658102517187, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.517217) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4013929, prev total WAL file size 4030063, number of live WAL files 2.
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.519631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2277KB)], [21(14MB)]
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102519697, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 18032766, "oldest_snapshot_seqno": -1}
Oct 5 05:55:02 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:55:02 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:55:02 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:55:02 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10502 keys, 14372101 bytes, temperature: kUnknown
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102628407, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14372101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14309725, "index_size": 35011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26309, "raw_key_size": 281346, "raw_average_key_size": 26, "raw_value_size": 14127932, "raw_average_value_size": 1345, "num_data_blocks": 1336, "num_entries": 10502, "num_filter_entries": 10502, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759658102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.628927) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14372101 bytes
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.631820) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 15.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(13.9) write-amplify(6.2) OK, records in: 11043, records dropped: 541 output_compression: NoCompression
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.631850) EVENT_LOG_v1 {"time_micros": 1759658102631837, "job": 10, "event": "compaction_finished", "compaction_time_micros": 108792, "compaction_time_cpu_micros": 42565, "output_level": 6, "num_output_files": 1, "total_output_size": 14372101, "num_input_records": 11043, "num_output_records": 10502, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102632305, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658102634628, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.519542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.634716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.634723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.634728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.634731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:55:02.634735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:55:02 localhost ceph-mon[302793]: Reconfiguring mon.np0005471151 (monmap changed)...
Oct 5 05:55:02 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:55:02 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:02 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:02 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:55:03 localhost podman[309292]: Oct 5 05:55:03 localhost podman[309292]: 2025-10-05 09:55:03.09874453 +0000 UTC m=+0.057540702 container create 32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hofstadter, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, release=553, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True) Oct 5 05:55:03 localhost systemd[1]: Started 
libpod-conmon-32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c.scope. Oct 5 05:55:03 localhost systemd[1]: Started libcrun container. Oct 5 05:55:03 localhost podman[309292]: 2025-10-05 09:55:03.163233798 +0000 UTC m=+0.122029980 container init 32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hofstadter, name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, ceph=True, vcs-type=git) Oct 5 05:55:03 localhost podman[309292]: 2025-10-05 09:55:03.071492796 +0000 UTC m=+0.030289058 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:03 localhost podman[309292]: 2025-10-05 09:55:03.174151882 +0000 UTC m=+0.132948054 container start 32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hofstadter, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, version=7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7) Oct 5 05:55:03 localhost podman[309292]: 2025-10-05 09:55:03.174316767 +0000 UTC m=+0.133112939 container attach 32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hofstadter, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-09-24T08:57:55, ceph=True, name=rhceph, 
com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:55:03 localhost angry_hofstadter[309307]: 167 167 Oct 5 05:55:03 localhost systemd[1]: libpod-32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c.scope: Deactivated successfully. Oct 5 05:55:03 localhost podman[309292]: 2025-10-05 09:55:03.179693962 +0000 UTC m=+0.138490214 container died 32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hofstadter, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, ceph=True, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container) Oct 5 05:55:03 localhost podman[309312]: 2025-10-05 09:55:03.265634239 +0000 UTC m=+0.070430330 container remove 32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_hofstadter, version=7, vcs-type=git, RELEASE=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.33.12) Oct 5 05:55:03 localhost systemd[1]: libpod-conmon-32fb1ff5515b2641967d83a21f194e9ed6d593ba5bfc68a5b7eaaaf9774f229c.scope: Deactivated successfully. Oct 5 05:55:03 localhost nova_compute[297130]: 2025-10-05 09:55:03.275 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:03 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 5 05:55:03 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:55:03 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:03 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:03 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34357 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005471147.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:55:03 localhost ceph-mgr[301363]: [cephadm INFO root] Removed host np0005471147.localdomain Oct 5 05:55:03 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removed host np0005471147.localdomain Oct 5 05:55:03 localhost ceph-mon[302793]: Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:55:03 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:55:03 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:03 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:03 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:55:03 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:03 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain"} : dispatch Oct 5 05:55:03 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix":"config-key 
del","key":"mgr/cephadm/host.np0005471147.localdomain"}]': finished Oct 5 05:55:03 localhost podman[309380]: Oct 5 05:55:03 localhost podman[309380]: 2025-10-05 09:55:03.966477202 +0000 UTC m=+0.079478063 container create af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_archimedes, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7) Oct 5 05:55:03 localhost systemd[1]: Started libpod-conmon-af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135.scope. Oct 5 05:55:04 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:04 localhost podman[309380]: 2025-10-05 09:55:04.024938018 +0000 UTC m=+0.137938839 container init af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_archimedes, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=) Oct 5 05:55:04 localhost podman[309380]: 2025-10-05 09:55:04.034019692 +0000 UTC m=+0.147020503 container start af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_archimedes, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat 
Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:04 localhost podman[309380]: 2025-10-05 09:55:04.03429981 +0000 UTC m=+0.147300621 container attach af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_archimedes, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph) Oct 5 05:55:04 localhost dazzling_archimedes[309395]: 167 167 Oct 5 05:55:04 localhost systemd[1]: 
libpod-af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135.scope: Deactivated successfully. Oct 5 05:55:04 localhost podman[309380]: 2025-10-05 09:55:03.937153382 +0000 UTC m=+0.050154223 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:04 localhost podman[309380]: 2025-10-05 09:55:04.037100895 +0000 UTC m=+0.150101736 container died af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_archimedes, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, architecture=x86_64, build-date=2025-09-24T08:57:55) Oct 5 05:55:04 localhost systemd[1]: var-lib-containers-storage-overlay-1f65ab0ba3c582d2da63c346d29e803305f4f8c227a82f9aa32643234dd8ad0f-merged.mount: Deactivated successfully. Oct 5 05:55:04 localhost systemd[1]: var-lib-containers-storage-overlay-cf4da183859eca58d2d687952c33dacfde0582377496f4a515dad356fa89902e-merged.mount: Deactivated successfully. 
Oct 5 05:55:04 localhost podman[309400]: 2025-10-05 09:55:04.132888267 +0000 UTC m=+0.085575007 container remove af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_archimedes, RELEASE=main, distribution-scope=public, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55) Oct 5 05:55:04 localhost systemd[1]: libpod-conmon-af4893d48ee1a9c3ca17c3914be9843692abfddc4f6cee6e4ba98f2fe6a1b135.scope: Deactivated successfully. 
Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.293 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.294 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.318 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.319 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.319 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.319 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.320 2 DEBUG 
oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:55:04 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Oct 5 05:55:04 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 5 05:55:04 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:04 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:55:04 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/388276169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.776 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:55:04 localhost ceph-mon[302793]: Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:55:04 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:04 localhost ceph-mon[302793]: Removed host np0005471147.localdomain Oct 5 05:55:04 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:04 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:04 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:55:04 localhost podman[309499]: Oct 5 05:55:04 localhost podman[309499]: 2025-10-05 09:55:04.941036153 +0000 UTC m=+0.086873793 container create 5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_gauss, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux ) Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.949 2 WARNING 
nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.950 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11871MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.950 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.950 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:55:04 localhost systemd[1]: Started libpod-conmon-5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9.scope. Oct 5 05:55:04 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:04 localhost podman[309499]: 2025-10-05 09:55:04.994984207 +0000 UTC m=+0.140821877 container init 5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_gauss, ceph=True, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.998 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:55:04 localhost nova_compute[297130]: 2025-10-05 09:55:04.999 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:55:05 localhost podman[309499]: 2025-10-05 09:55:04.90570414 +0000 UTC m=+0.051541840 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:05 localhost podman[309499]: 2025-10-05 09:55:05.008085731 +0000 UTC m=+0.153923371 container start 5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_gauss, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:55:05 localhost podman[309499]: 2025-10-05 09:55:05.008291146 +0000 UTC m=+0.154128866 container attach 5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_gauss, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, 
GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.33.12, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, ceph=True, GIT_BRANCH=main, name=rhceph) Oct 5 05:55:05 localhost systemd[1]: libpod-5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9.scope: Deactivated successfully. Oct 5 05:55:05 localhost goofy_gauss[309514]: 167 167 Oct 5 05:55:05 localhost nova_compute[297130]: 2025-10-05 09:55:05.024 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:55:05 localhost podman[309519]: 2025-10-05 09:55:05.091314525 +0000 UTC m=+0.062028634 container died 5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_gauss, release=553, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.buildah.version=1.33.12, name=rhceph, 
RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:55:05 localhost systemd[1]: var-lib-containers-storage-overlay-02794c58309d16bdcaf76d603b9d52336912821b9698db0ba095b22aa2e56dce-merged.mount: Deactivated successfully. Oct 5 05:55:05 localhost podman[309519]: 2025-10-05 09:55:05.129261187 +0000 UTC m=+0.099975266 container remove 5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_gauss, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_CLEAN=True, 
vcs-type=git, release=553, RELEASE=main) Oct 5 05:55:05 localhost systemd[1]: libpod-conmon-5491809c6d1c6ae41c79d8311f28e241d1d84decf42c8234210b881c0a3deec9.scope: Deactivated successfully. Oct 5 05:55:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:55:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5762 writes, 25K keys, 5762 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5762 writes, 773 syncs, 7.45 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 77 writes, 211 keys, 77 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s#012Interval WAL: 77 writes, 38 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 05:55:05 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:05 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:05 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:05 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:55:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2109763622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:55:05 localhost nova_compute[297130]: 2025-10-05 09:55:05.453 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:55:05 localhost nova_compute[297130]: 2025-10-05 09:55:05.460 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:55:05 localhost nova_compute[297130]: 2025-10-05 09:55:05.478 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:55:05 localhost nova_compute[297130]: 2025-10-05 09:55:05.482 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:55:05 localhost nova_compute[297130]: 2025-10-05 09:55:05.482 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:55:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:55:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:55:05 localhost ceph-mon[302793]: Reconfiguring osd.3 (monmap changed)... Oct 5 05:55:05 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:05 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:05 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:05 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:05 localhost systemd[1]: tmp-crun.l5oJd6.mount: Deactivated successfully. 
Oct 5 05:55:05 localhost podman[309614]: 2025-10-05 09:55:05.942408297 +0000 UTC m=+0.100051057 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:55:05 localhost podman[309628]: Oct 5 05:55:05 localhost podman[309614]: 2025-10-05 09:55:05.953750143 +0000 UTC m=+0.111392933 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 05:55:05 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:55:06 localhost podman[309628]: 2025-10-05 09:55:06.003849554 +0000 UTC m=+0.135191656 container create 05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_tharp, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:55:06 localhost podman[309628]: 2025-10-05 09:55:05.922982993 +0000 UTC m=+0.054325115 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:06 localhost systemd[1]: Started libpod-conmon-05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf.scope. Oct 5 05:55:06 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:06 localhost podman[309628]: 2025-10-05 09:55:06.064657693 +0000 UTC m=+0.195999785 container init 05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_tharp, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Oct 5 05:55:06 localhost competent_tharp[309661]: 167 167 Oct 5 05:55:06 localhost systemd[1]: libpod-05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf.scope: Deactivated successfully. 
Oct 5 05:55:06 localhost podman[309628]: 2025-10-05 09:55:06.077871179 +0000 UTC m=+0.209213281 container start 05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_tharp, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, architecture=x86_64, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:55:06 localhost podman[309628]: 2025-10-05 09:55:06.078670721 +0000 UTC m=+0.210012823 container attach 05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_tharp, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:55:06 localhost podman[309615]: 2025-10-05 09:55:06.079882743 +0000 UTC m=+0.237568364 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:55:06 localhost podman[309628]: 2025-10-05 09:55:06.134614919 +0000 UTC m=+0.265957051 container died 05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_tharp, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, distribution-scope=public, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=) Oct 5 05:55:06 localhost podman[309615]: 2025-10-05 09:55:06.168132202 +0000 UTC m=+0.325817843 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 05:55:06 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:55:06 localhost systemd[1]: var-lib-containers-storage-overlay-6aa9998bb012039fac50001d31dfbf6d1bbb73489167590de97bcf1df79f179a-merged.mount: Deactivated successfully. 
Oct 5 05:55:06 localhost podman[309671]: 2025-10-05 09:55:06.263352969 +0000 UTC m=+0.170761554 container remove 05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_tharp, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=) Oct 5 05:55:06 localhost systemd[1]: libpod-conmon-05cd4fa7093a9917ba66b01ad6b60d68e625a7c8990d288b858fd06ed7a2dcdf.scope: Deactivated successfully. Oct 5 05:55:06 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:55:06 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... 
Oct 5 05:55:06 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:06 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:06 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:06 localhost nova_compute[297130]: 2025-10-05 09:55:06.479 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:06 localhost nova_compute[297130]: 2025-10-05 09:55:06.480 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:06 localhost nova_compute[297130]: 2025-10-05 09:55:06.517 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:06 localhost nova_compute[297130]: 2025-10-05 09:55:06.517 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:06 localhost nova_compute[297130]: 2025-10-05 09:55:06.517 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:55:06 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:06 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:06 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:06 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:06 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:06 localhost podman[309747]: Oct 5 05:55:06 localhost podman[309747]: 2025-10-05 09:55:06.954973324 +0000 UTC m=+0.060787780 container create 4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_euclid, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:55:06 localhost systemd[1]: Started libpod-conmon-4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9.scope. Oct 5 05:55:07 localhost systemd[1]: Started libcrun container. Oct 5 05:55:07 localhost podman[309747]: 2025-10-05 09:55:07.018383963 +0000 UTC m=+0.124198439 container init 4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_euclid, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container) Oct 5 05:55:07 localhost podman[309747]: 2025-10-05 09:55:07.027439667 +0000 UTC m=+0.133254163 container start 4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_euclid, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, 
com.redhat.component=rhceph-container, io.buildah.version=1.33.12, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, io.openshift.expose-services=) Oct 5 05:55:07 localhost podman[309747]: 2025-10-05 09:55:07.027669753 +0000 UTC m=+0.133484240 container attach 4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_euclid, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, architecture=x86_64, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, 
vcs-type=git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True) Oct 5 05:55:07 localhost romantic_euclid[309762]: 167 167 Oct 5 05:55:07 localhost systemd[1]: libpod-4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9.scope: Deactivated successfully. Oct 5 05:55:07 localhost podman[309747]: 2025-10-05 09:55:06.929653411 +0000 UTC m=+0.035467917 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:07 localhost podman[309747]: 2025-10-05 09:55:07.030669884 +0000 UTC m=+0.136484400 container died 4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_euclid, release=553, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_BRANCH=main, version=7) Oct 5 05:55:07 localhost systemd[1]: var-lib-containers-storage-overlay-18df642bfb9c80d1c27a1908f2ed06b2a12c7c55698614d318ce48448454cfb4-merged.mount: Deactivated successfully. 
Oct 5 05:55:07 localhost podman[309767]: 2025-10-05 09:55:07.126201289 +0000 UTC m=+0.084471848 container remove 4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_euclid, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 05:55:07 localhost systemd[1]: libpod-conmon-4a9bd1d689148e7ba849132eafc75768a463e9006ba6d9a888f7d5118db358f9.scope: Deactivated successfully. Oct 5 05:55:07 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:55:07 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005471152 (monmap changed)... 
Oct 5 05:55:07 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:55:07 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:55:07 localhost nova_compute[297130]: 2025-10-05 09:55:07.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:07 localhost nova_compute[297130]: 2025-10-05 09:55:07.275 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:07 localhost podman[309836]: Oct 5 05:55:07 localhost podman[309836]: 2025-10-05 09:55:07.792273165 +0000 UTC m=+0.076067761 container create 9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_franklin, release=553, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:55:07 localhost systemd[1]: Started libpod-conmon-9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7.scope. Oct 5 05:55:07 localhost systemd[1]: Started libcrun container. Oct 5 05:55:07 localhost podman[309836]: 2025-10-05 09:55:07.860622777 +0000 UTC m=+0.144417363 container init 9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_franklin, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55) Oct 5 05:55:07 localhost podman[309836]: 2025-10-05 09:55:07.761796064 +0000 UTC m=+0.045590690 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:07 localhost podman[309836]: 2025-10-05 09:55:07.871217363 +0000 UTC m=+0.155011949 container start 9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_franklin, version=7, release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux ) Oct 5 05:55:07 localhost podman[309836]: 2025-10-05 09:55:07.871533432 +0000 UTC m=+0.155328058 container attach 9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_franklin, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, release=553) Oct 5 05:55:07 localhost gracious_franklin[309851]: 167 167 Oct 5 05:55:07 localhost systemd[1]: libpod-9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7.scope: Deactivated successfully. Oct 5 05:55:07 localhost podman[309836]: 2025-10-05 09:55:07.875036876 +0000 UTC m=+0.158831542 container died 9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_franklin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, name=rhceph, GIT_BRANCH=main, RELEASE=main, 
io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:07 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:55:07 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:07 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:07 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:07 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:55:07 localhost podman[309856]: 2025-10-05 09:55:07.971530108 +0000 UTC m=+0.085332642 container remove 9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_franklin, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, 
architecture=x86_64) Oct 5 05:55:07 localhost systemd[1]: libpod-conmon-9ff174ae3d0bbc9fc6e6bb70d59e9fccdf2ec34c57eedecca9fb5b92490f6db7.scope: Deactivated successfully. Oct 5 05:55:08 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 0563127b-67e1-420d-9bb4-8fcad10a2343 (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:08 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 0563127b-67e1-420d-9bb4-8fcad10a2343 (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:08 localhost ceph-mgr[301363]: [progress INFO root] Completed event 0563127b-67e1-420d-9bb4-8fcad10a2343 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Oct 5 05:55:08 localhost systemd[1]: var-lib-containers-storage-overlay-ef300b5b1345d67f6facc4916266596e52db4f5c41b67e012139e35fc2bf20ad-merged.mount: Deactivated successfully. Oct 5 05:55:08 localhost nova_compute[297130]: 2025-10-05 09:55:08.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:55:09 localhost ceph-mon[302793]: Reconfiguring mon.np0005471152 (monmap changed)... 
Oct 5 05:55:09 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:55:09 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:09 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:09 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:09 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:11 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:11 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 05:55:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.27388 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:55:12 localhost ceph-mgr[301363]: [cephadm INFO root] Saving service mon spec with placement label:mon Oct 5 05:55:12 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Oct 5 05:55:12 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev f5eeee26-5840-4f4e-972a-f6e1ca9786f0 (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:12 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev f5eeee26-5840-4f4e-972a-f6e1ca9786f0 (Updating node-proxy deployment 
(+4 -> 4)) Oct 5 05:55:12 localhost ceph-mgr[301363]: [progress INFO root] Completed event f5eeee26-5840-4f4e-972a-f6e1ca9786f0 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Oct 5 05:55:12 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:12 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:12 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:12 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:13 localhost ceph-mon[302793]: Saving service mon spec with placement label:mon Oct 5 05:55:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34385 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005471151", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Oct 5 05:55:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.27400 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005471151"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:55:15 localhost ceph-mgr[301363]: [cephadm INFO root] Remove daemons mon.np0005471151 Oct 5 05:55:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005471151 Oct 5 05:55:15 localhost ceph-mgr[301363]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005471151: new quorum should be ['np0005471148', 'np0005471152', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471150']) Oct 5 05:55:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : 
Safe to remove mon.np0005471151: new quorum should be ['np0005471148', 'np0005471152', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471150']) Oct 5 05:55:15 localhost ceph-mgr[301363]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005471151 from monmap... Oct 5 05:55:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removing monitor np0005471151 from monmap... Oct 5 05:55:15 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005471151 from np0005471151.localdomain -- ports [] Oct 5 05:55:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005471151 from np0005471151.localdomain -- ports [] Oct 5 05:55:15 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:55:15 localhost ceph-mon[302793]: paxos.1).electionLogic(38) init, last seen epoch 38 Oct 5 05:55:15 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:15 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:55:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:16 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_09:55:16 Oct 5 05:55:16 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 05:55:16 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 05:55:16 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_data', '.mgr', 'vms', 'backups', 'volumes', 'manila_metadata', 'images'] Oct 5 05:55:16 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO 
root] _maybe_adjust Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 05:55:16 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 
'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Oct 5 05:55:16 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:55:16 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:55:16 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:55:16 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:55:16 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:55:16 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:55:16 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:55:16 localhost openstack_network_exporter[250246]: ERROR 09:55:16 appctl.go:131: Failed to prepare call to ovsdb-server: 
no control socket files found for the ovs db server Oct 5 05:55:16 localhost openstack_network_exporter[250246]: ERROR 09:55:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:55:16 localhost openstack_network_exporter[250246]: ERROR 09:55:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:55:16 localhost openstack_network_exporter[250246]: ERROR 09:55:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:55:16 localhost openstack_network_exporter[250246]: Oct 5 05:55:16 localhost openstack_network_exporter[250246]: ERROR 09:55:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:55:16 localhost openstack_network_exporter[250246]: Oct 5 05:55:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:55:17 localhost podman[309909]: 2025-10-05 09:55:17.931693287 +0000 UTC m=+0.092506375 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true)
Oct 5 05:55:17 localhost podman[309910]: 2025-10-05 09:55:17.975384995 +0000 UTC m=+0.135566206 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:55:17 localhost podman[309910]: 2025-10-05 09:55:17.989124496 +0000 UTC m=+0.149305727 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:55:17 localhost podman[309909]: 2025-10-05 09:55:17.998408356 +0000 UTC m=+0.159221494 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 05:55:18 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 05:55:18 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 05:55:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:55:19 localhost ceph-mds[300011]: mds.beacon.mds.np0005471152.pozuqw missed beacon ack from the monitors
Oct 5 05:55:20 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[302793]: paxos.1).electionLogic(41) init, last seen epoch 41, mid-election, bumping
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:55:20 localhost ceph-mon[302793]: Safe to remove mon.np0005471151: new quorum should be ['np0005471148', 'np0005471152', 'np0005471150'] (from ['np0005471148', 'np0005471152', 'np0005471150'])
Oct 5 05:55:20 localhost ceph-mon[302793]: Removing monitor np0005471151 from monmap...
Oct 5 05:55:20 localhost ceph-mon[302793]: Removing daemon mon.np0005471151 from np0005471151.localdomain -- ports []
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471152 in quorum (ranks 0,1)
Oct 5 05:55:20 localhost ceph-mon[302793]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: stray daemon mgr.np0005471146.xqzesq on host np0005471146.localdomain not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[302793]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: stray host np0005471146.localdomain has 1 stray daemons: ['mgr.np0005471146.xqzesq']
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election
Oct 5 05:55:20 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471150 in quorum (ranks 0,1,2)
Oct 5 05:55:20 localhost ceph-mon[302793]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: stray daemon mgr.np0005471146.xqzesq on host np0005471146.localdomain not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Oct 5 05:55:20 localhost ceph-mon[302793]: stray host np0005471146.localdomain has 1 stray daemons: ['mgr.np0005471146.xqzesq']
Oct 5 05:55:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:20 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:55:20.392 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:55:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:55:20.392 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:55:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:55:20.393 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 05:55:21 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:55:21 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:21 localhost systemd[1]: tmp-crun.B3A453.mount: Deactivated successfully.
Oct 5 05:55:21 localhost podman[309969]: 2025-10-05 09:55:21.340232121 +0000 UTC m=+0.097211441 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible)
Oct 5 05:55:21 localhost podman[309969]: 2025-10-05 09:55:21.382242644 +0000 UTC m=+0.139221974 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Oct 5 05:55:21 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:55:21 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:21 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev b0c29761-13e9-467e-b4ec-cfac7831cfa5 (Updating node-proxy deployment (+4 -> 4))
Oct 5 05:55:22 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev b0c29761-13e9-467e-b4ec-cfac7831cfa5 (Updating node-proxy deployment (+4 -> 4))
Oct 5 05:55:22 localhost ceph-mgr[301363]: [progress INFO root] Completed event b0c29761-13e9-467e-b4ec-cfac7831cfa5 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Oct 5 05:55:22 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:55:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:55:22 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:55:22 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:55:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:55:23 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:55:23 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:55:23 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:55:23 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:55:23 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)...
Oct 5 05:55:23 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:55:23 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain
Oct 5 05:55:23 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:23 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:23 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:55:24 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:55:24 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:55:24 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:55:24 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:55:24 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)...
Oct 5 05:55:24 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain
Oct 5 05:55:24 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:24 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:24 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:55:25 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 05:55:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:55:25 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct 5 05:55:25 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct 5 05:55:25 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:55:25 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:55:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:55:25 localhost podman[310309]: 2025-10-05 09:55:25.916127924 +0000 UTC m=+0.077873951 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 5 05:55:25 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:55:25 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:55:25 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:25 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:25 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus'
Oct 5 05:55:25 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:55:25 localhost podman[310309]: 2025-10-05 09:55:25.925259691 +0000 UTC m=+0.087005738 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 5 05:55:25 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:55:26 localhost podman[248157]: time="2025-10-05T09:55:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:55:26 localhost podman[248157]: @ - - [05/Oct/2025:09:55:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1"
Oct 5 05:55:26 localhost podman[248157]: @ - - [05/Oct/2025:09:55:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19303 "" "Go-http-client/1.1"
Oct 5 05:55:26 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:26 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Oct 5 05:55:26 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Oct 5 05:55:26 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:55:26 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:55:26 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:55:26 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:55:26 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:26 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:26 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:55:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:27 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:55:27 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:55:27 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:55:27 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:55:27 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:55:27 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:55:27 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:27 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:27 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34393 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005471151.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:55:28 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:55:28 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:55:28 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:55:28 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:55:28 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:55:28 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:55:28 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:55:28 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:55:28 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:28 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:55:28 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:28 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:28 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:29 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:55:29 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:55:29 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:55:29 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:55:29 localhost podman[310329]: 2025-10-05 09:55:29.903362989 +0000 UTC m=+0.070284706 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:55:29 localhost podman[310329]: 2025-10-05 09:55:29.911529419 +0000 UTC m=+0.078451166 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:55:29 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:55:29 localhost podman[310328]: 2025-10-05 09:55:29.97052734 +0000 UTC m=+0.135523305 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:55:29 localhost ceph-mon[302793]: Deploying daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:55:29 localhost ceph-mon[302793]: 
Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:55:29 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:55:29 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:29 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:29 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:55:30 localhost podman[310328]: 2025-10-05 09:55:30.007885087 +0000 UTC m=+0.172881002 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 05:55:30 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:55:30 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:55:31 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:55:31 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:31 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:31 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:31 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Oct 5 05:55:31 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
Oct 5 05:55:31 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:55:31 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:55:31 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:31 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:31 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:32 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:55:32 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:32 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:32 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Oct 5 05:55:32 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... 
Oct 5 05:55:32 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:55:32 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:55:33 localhost ceph-mon[302793]: Reconfiguring osd.2 (monmap changed)... Oct 5 05:55:33 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:55:33 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:33 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:33 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:55:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:33 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:33 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:33 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:33 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:55:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... 
Oct 5 05:55:33 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:55:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:55:34 localhost ceph-mon[302793]: Reconfiguring osd.5 (monmap changed)... Oct 5 05:55:34 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain Oct 5 05:55:34 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:34 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:34 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... Oct 5 05:55:34 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:34 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:55:34 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471151.jecxod (monmap changed)... Oct 5 05:55:34 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:55:34 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:55:34 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:55:34 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:34 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:35 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:55:35 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:55:35 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:55:35 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:55:35 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... 
Oct 5 05:55:35 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:35 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:55:35 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:35 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:55:35 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:35 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:35 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:35 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:36 localhost podman[310422]: Oct 5 05:55:36 localhost podman[310422]: 2025-10-05 09:55:36.044095257 +0000 UTC m=+0.077813169 container create 5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_easley, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, architecture=x86_64) Oct 5 05:55:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:55:36 localhost systemd[1]: Started libpod-conmon-5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f.scope. Oct 5 05:55:36 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:36 localhost podman[310422]: 2025-10-05 09:55:36.012113274 +0000 UTC m=+0.045831216 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:36 localhost podman[310422]: 2025-10-05 09:55:36.116500228 +0000 UTC m=+0.150218150 container init 5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_easley, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc.) 
Oct 5 05:55:36 localhost podman[310422]: 2025-10-05 09:55:36.137901755 +0000 UTC m=+0.171619667 container start 5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_easley, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, version=7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64) Oct 5 05:55:36 localhost podman[310422]: 2025-10-05 09:55:36.138812319 +0000 UTC m=+0.172530271 container attach 5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_easley, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 
9, distribution-scope=public, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:55:36 localhost pensive_easley[310438]: 167 167 Oct 5 05:55:36 localhost systemd[1]: libpod-5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f.scope: Deactivated successfully. Oct 5 05:55:36 localhost podman[310437]: 2025-10-05 09:55:36.181310606 +0000 UTC m=+0.101718804 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 05:55:36 localhost podman[310422]: 2025-10-05 09:55:36.193980447 +0000 UTC m=+0.227698359 container died 5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_easley, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64) Oct 5 05:55:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:55:36 localhost podman[310437]: 2025-10-05 09:55:36.216805302 +0000 UTC m=+0.137213460 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:55:36 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:55:36 localhost podman[310453]: 2025-10-05 09:55:36.244712304 +0000 UTC m=+0.089133544 container remove 5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_easley, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:36 localhost systemd[1]: libpod-conmon-5c43111c7411706ae02e6eeb5137285623ca394794c9dc66d5dd8084fab30b5f.scope: Deactivated successfully. 
Oct 5 05:55:36 localhost podman[310474]: 2025-10-05 09:55:36.307016314 +0000 UTC m=+0.087124520 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 05:55:36 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Oct 5 05:55:36 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... 
Oct 5 05:55:36 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:36 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:36 localhost podman[310474]: 2025-10-05 09:55:36.36807003 +0000 UTC m=+0.148178246 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller) Oct 5 05:55:36 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:55:36 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:36 localhost ceph-mon[302793]: Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:55:36 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:55:36 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:36 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:36 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:55:36 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:36 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:36 localhost podman[310555]: Oct 5 05:55:36 localhost podman[310555]: 2025-10-05 09:55:36.956088901 +0000 UTC m=+0.077003686 container create 31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_wing, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, name=rhceph, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:36 localhost systemd[1]: Started libpod-conmon-31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8.scope. Oct 5 05:55:37 localhost systemd[1]: Started libcrun container. Oct 5 05:55:37 localhost podman[310555]: 2025-10-05 09:55:37.022246965 +0000 UTC m=+0.143161750 container init 31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_wing, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, RELEASE=main, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:55:37 localhost podman[310555]: 2025-10-05 09:55:36.926302719 +0000 UTC m=+0.047217534 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:37 localhost podman[310555]: 2025-10-05 09:55:37.03280836 +0000 UTC m=+0.153723145 container start 31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_wing, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, distribution-scope=public, version=7) Oct 5 05:55:37 localhost podman[310555]: 2025-10-05 09:55:37.033079847 +0000 UTC m=+0.153994682 container attach 31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_wing, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux , architecture=x86_64, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red 
Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:37 localhost nervous_wing[310570]: 167 167 Oct 5 05:55:37 localhost systemd[1]: libpod-31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8.scope: Deactivated successfully. Oct 5 05:55:37 localhost podman[310555]: 2025-10-05 09:55:37.036559591 +0000 UTC m=+0.157474426 container died 31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_wing, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, architecture=x86_64, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, 
vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55) Oct 5 05:55:37 localhost systemd[1]: tmp-crun.2GmPdA.mount: Deactivated successfully. Oct 5 05:55:37 localhost systemd[1]: var-lib-containers-storage-overlay-a02f1dfc254a1eff567d486758e5d3b8153a228f4ebecdd39c299bfa12f10442-merged.mount: Deactivated successfully. Oct 5 05:55:37 localhost systemd[1]: var-lib-containers-storage-overlay-cdde2edaf765341107be03eca882dfa6d73c28177004f533b0a05cbbd0d74c1c-merged.mount: Deactivated successfully. Oct 5 05:55:37 localhost podman[310575]: 2025-10-05 09:55:37.144643014 +0000 UTC m=+0.098357642 container remove 31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_wing, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Oct 5 05:55:37 localhost systemd[1]: libpod-conmon-31ee90abda4c6388b1d787acadca03657fdcd41099ddb2fab6de38162d88eea8.scope: Deactivated successfully. 
Oct 5 05:55:37 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Oct 5 05:55:37 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Oct 5 05:55:37 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:37 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:37 localhost ceph-mon[302793]: Reconfiguring osd.0 (monmap changed)... Oct 5 05:55:37 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain Oct 5 05:55:37 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:37 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:37 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Oct 5 05:55:37 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:37 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:37 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:37 localhost podman[310652]: Oct 5 05:55:38 localhost podman[310652]: 2025-10-05 09:55:38.008953934 +0000 UTC m=+0.076892874 container create 68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, 
build-date=2025-09-24T08:57:55, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7) Oct 5 05:55:38 localhost systemd[1]: Started libpod-conmon-68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766.scope. Oct 5 05:55:38 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:38 localhost podman[310652]: 2025-10-05 09:55:38.072145618 +0000 UTC m=+0.140084558 container init 68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, ceph=True) Oct 5 05:55:38 localhost podman[310652]: 2025-10-05 09:55:37.975613015 +0000 UTC m=+0.043551985 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:38 localhost podman[310652]: 2025-10-05 09:55:38.083039901 +0000 UTC m=+0.150978841 container start 68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=553, io.buildah.version=1.33.12, 
io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-type=git) Oct 5 05:55:38 localhost podman[310652]: 2025-10-05 09:55:38.083332699 +0000 UTC m=+0.151271679 container attach 68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.openshift.expose-services=, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Oct 5 
05:55:38 localhost blissful_carver[310665]: 167 167 Oct 5 05:55:38 localhost systemd[1]: libpod-68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766.scope: Deactivated successfully. Oct 5 05:55:38 localhost podman[310652]: 2025-10-05 09:55:38.087374178 +0000 UTC m=+0.155313148 container died 68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc.) 
Oct 5 05:55:38 localhost podman[310670]: 2025-10-05 09:55:38.182250306 +0000 UTC m=+0.081923720 container remove 68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Oct 5 05:55:38 localhost systemd[1]: libpod-conmon-68c984e147e87b02b16b417b8b77800ca2872e488203c48848ed43f9f47a8766.scope: Deactivated successfully. Oct 5 05:55:38 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:38 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... 
Oct 5 05:55:38 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:38 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:38 localhost ceph-mon[302793]: Reconfiguring osd.3 (monmap changed)... Oct 5 05:55:38 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:55:38 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:38 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:38 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:55:38 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:38 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:39 localhost podman[310747]: Oct 5 05:55:39 localhost podman[310747]: 2025-10-05 09:55:39.021475648 +0000 UTC m=+0.077527810 container create eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_thompson, version=7, release=553, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Oct 5 05:55:39 localhost systemd[1]: Started libpod-conmon-eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3.scope. Oct 5 05:55:39 localhost systemd[1]: var-lib-containers-storage-overlay-f9330e3f8cd242137de86f7b4c0b5765078b955223e4b3cddcfedb747d06cfbd-merged.mount: Deactivated successfully. Oct 5 05:55:39 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:39 localhost podman[310747]: 2025-10-05 09:55:39.087051107 +0000 UTC m=+0.143103269 container init eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_thompson, release=553, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64) Oct 5 05:55:39 localhost podman[310747]: 2025-10-05 09:55:38.988659284 +0000 UTC m=+0.044711496 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:39 localhost systemd[1]: tmp-crun.Jwp8U8.mount: Deactivated successfully. 
Oct 5 05:55:39 localhost podman[310747]: 2025-10-05 09:55:39.098891406 +0000 UTC m=+0.154943568 container start eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_thompson, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:55:39 localhost podman[310747]: 2025-10-05 09:55:39.099270526 +0000 UTC m=+0.155322708 container attach eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_thompson, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:55:39 localhost quirky_thompson[310761]: 167 167 Oct 5 05:55:39 localhost systemd[1]: libpod-eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3.scope: Deactivated successfully. Oct 5 05:55:39 localhost podman[310747]: 2025-10-05 09:55:39.102307947 +0000 UTC m=+0.158360129 container died eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_thompson, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, ceph=True, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, distribution-scope=public, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:55:39 localhost podman[310766]: 2025-10-05 09:55:39.197333479 +0000 UTC m=+0.082054003 container remove eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_thompson, io.openshift.expose-services=, release=553, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, io.buildah.version=1.33.12, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Oct 5 05:55:39 localhost systemd[1]: libpod-conmon-eb039a65ed0f4ee25b4dc02c5e9c2fba77b4b4e4e9272dee65652d88075136f3.scope: Deactivated successfully. Oct 5 05:55:39 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... Oct 5 05:55:39 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... 
Oct 5 05:55:39 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:39 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:39 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... Oct 5 05:55:39 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:55:39 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:39 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:39 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:55:39 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:39 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:39 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:39 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:39 localhost podman[310835]: Oct 5 05:55:39 localhost podman[310835]: 2025-10-05 09:55:39.888361298 +0000 UTC m=+0.074279594 container create 
6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_lalande, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, release=553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Oct 5 05:55:39 localhost systemd[1]: Started libpod-conmon-6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a.scope. Oct 5 05:55:39 localhost systemd[1]: Started libcrun container. 
Oct 5 05:55:39 localhost podman[310835]: 2025-10-05 09:55:39.948371795 +0000 UTC m=+0.134290091 container init 6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_lalande, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64, distribution-scope=public) Oct 5 05:55:39 localhost podman[310835]: 2025-10-05 09:55:39.957501522 +0000 UTC m=+0.143419828 container start 6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_lalande, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-type=git) Oct 5 05:55:39 localhost podman[310835]: 2025-10-05 09:55:39.858620456 +0000 UTC m=+0.044538772 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:55:39 localhost podman[310835]: 2025-10-05 09:55:39.95781901 +0000 UTC m=+0.143737386 container attach 6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_lalande, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, architecture=x86_64, distribution-scope=public, name=rhceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, 
ceph=True, description=Red Hat Ceph Storage 7) Oct 5 05:55:39 localhost inspiring_lalande[310851]: 167 167 Oct 5 05:55:39 localhost systemd[1]: libpod-6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a.scope: Deactivated successfully. Oct 5 05:55:39 localhost podman[310835]: 2025-10-05 09:55:39.96004532 +0000 UTC m=+0.145963656 container died 6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_lalande, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:55:40 localhost podman[310856]: 2025-10-05 09:55:40.052106201 +0000 UTC m=+0.083310606 container remove 6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_lalande, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, 
io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55) Oct 5 05:55:40 localhost systemd[1]: libpod-conmon-6e9807138ac9fa8d728029f00b31ca18e4462a2740144b5862c69ee609b4873a.scope: Deactivated successfully. Oct 5 05:55:40 localhost systemd[1]: var-lib-containers-storage-overlay-de7a17e81282fe09bdf4ea5bac7bceb9913936e4e1efa00e2fc83e1b78ab90da-merged.mount: Deactivated successfully. Oct 5 05:55:40 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:40 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)... 
Oct 5 05:55:40 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain Oct 5 05:55:40 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:40 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:40 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:41 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:41 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:41 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:41 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:41 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 4ff5fac2-f2a8-42cf-aae7-2da4054e41ba (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:41 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 4ff5fac2-f2a8-42cf-aae7-2da4054e41ba (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:41 localhost ceph-mgr[301363]: [progress INFO root] Completed event 4ff5fac2-f2a8-42cf-aae7-2da4054e41ba (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Oct 5 05:55:42 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:42 localhost 
ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:42 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:42 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:42 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:42 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:43 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:43 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:43 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:43 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:44 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:44 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:45 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 05:55:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 
05:55:45 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:45 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:45 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:46 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:46 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Oct 5 05:55:46 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 05:55:46 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:46 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:46 localhost openstack_network_exporter[250246]: ERROR 09:55:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:55:46 localhost openstack_network_exporter[250246]: ERROR 09:55:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:55:46 localhost openstack_network_exporter[250246]: ERROR 09:55:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:55:46 localhost openstack_network_exporter[250246]: ERROR 09:55:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:55:46 localhost openstack_network_exporter[250246]: Oct 5 05:55:46 localhost openstack_network_exporter[250246]: ERROR 09:55:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:55:46 localhost openstack_network_exporter[250246]: Oct 5 05:55:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.34399 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Oct 5 05:55:47 localhost ceph-mgr[301363]: [cephadm INFO root] Reconfig service osd.default_drive_group Oct 5 05:55:47 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Oct 5 05:55:47 localhost ceph-mgr[301363]: [progress 
INFO root] update: starting ev 288e21c2-783e-4100-b198-12b123183852 (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:47 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 288e21c2-783e-4100-b198-12b123183852 (Updating node-proxy deployment (+4 -> 4)) Oct 5 05:55:47 localhost ceph-mgr[301363]: [progress INFO root] Completed event 288e21c2-783e-4100-b198-12b123183852 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Oct 5 05:55:47 localhost ceph-mon[302793]: Reconfig service osd.default_drive_group Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' 
entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 565 MiB used, 41 GiB / 42 GiB avail Oct 5 05:55:47 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:55:47 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:55:47 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:47 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:47 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:47 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:48 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:55:48 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:55:48 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mon.np0005471151 172.18.0.107:0/2738498825; not ready for session (expect reconnect) Oct 5 05:55:48 localhost ceph-mgr[301363]: mgr finish mon failed to return metadata for mon.np0005471151: (2) No such file or directory Oct 5 05:55:48 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on 
np0005471150.localdomain Oct 5 05:55:48 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:55:48 localhost podman[310979]: 2025-10-05 09:55:48.929561564 +0000 UTC m=+0.092306829 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 05:55:48 localhost podman[310980]: 2025-10-05 09:55:48.979205933 +0000 UTC m=+0.139294697 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:55:48 localhost podman[310980]: 2025-10-05 09:55:48.989067299 +0000 UTC m=+0.149156083 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:55:49 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e84 e84: 6 total, 6 up, 6 in Oct 5 05:55:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:49.027+0000 7f708deb3640 -1 mgr handle_mgr_map I was active but no longer am Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471148"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471148"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": 
"np0005471152"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471151.uyxcpj"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mds metadata", "who": "mds.np0005471151.uyxcpj"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471150.bsiqok"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mds metadata", "who": "mds.np0005471150.bsiqok"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471152.pozuqw"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mds metadata", "who": "mds.np0005471152.pozuqw"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon).mds e16 all = 0 Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon).mds e16 all = 0 Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon).mds e16 all = 0 Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471148.fayrer", "id": "np0005471148.fayrer"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mgr metadata", "who": "np0005471148.fayrer", "id": "np0005471148.fayrer"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command 
mon_command({"prefix": "mgr metadata", "who": "np0005471147.mwpyfl", "id": "np0005471147.mwpyfl"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mgr metadata", "who": "np0005471147.mwpyfl", "id": "np0005471147.mwpyfl"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471150.zwqxye", "id": "np0005471150.zwqxye"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mgr metadata", "who": "np0005471150.zwqxye", "id": "np0005471150.zwqxye"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471151.jecxod", "id": "np0005471151.jecxod"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mgr metadata", "who": "np0005471151.jecxod", "id": "np0005471151.jecxod"} : dispatch Oct 5 05:55:49 localhost podman[310979]: 2025-10-05 09:55:49.04366515 +0000 UTC m=+0.206410415 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 
'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 0} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 1} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 
172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 2} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 3} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 4} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 5} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata", "id": 5} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mds metadata"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mds metadata"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon).mds e16 all = 1 Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd 
metadata"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd metadata"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata"} : dispatch Oct 5 05:55:49 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:55:49 localhost systemd[1]: session-69.scope: Deactivated successfully. Oct 5 05:55:49 localhost systemd[1]: session-69.scope: Consumed 22.916s CPU time. Oct 5 05:55:49 localhost systemd-logind[760]: Session 69 logged out. Waiting for processes to exit. Oct 5 05:55:49 localhost systemd-logind[760]: Removed session 69. Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: ignoring --setuser ceph since I am not root Oct 5 05:55:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: ignoring --setgroup ceph since I am not root Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [INF] : 
from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mgr[301363]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Oct 5 05:55:49 localhost ceph-mgr[301363]: pidfile_write: ignore empty --pid-file Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Loading python module 'alerts' Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/mirror_snapshot_schedule"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/mirror_snapshot_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Loading python module 'balancer' Oct 5 05:55:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:49.256+0000 7f4212f33140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/trash_purge_schedule"} v 0) Oct 5 05:55:49 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/trash_purge_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Loading python module 
'cephadm' Oct 5 05:55:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:49.323+0000 7f4212f33140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Oct 5 05:55:49 localhost sshd[311044]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:55:49 localhost systemd-logind[760]: New session 70 of user ceph-admin. Oct 5 05:55:49 localhost systemd[1]: Started Session 70 of User ceph-admin. Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.17403 172.18.0.108:0/3451461818' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='client.? 172.18.0.200:0/3757018629' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: Activating manager daemon np0005471148.fayrer Oct 5 05:55:49 localhost ceph-mon[302793]: from='client.? 
172.18.0.200:0/3757018629' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:55:49 localhost ceph-mon[302793]: Manager daemon np0005471148.fayrer is now available Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"}]': finished Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471147.localdomain.devices.0"}]': finished Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/mirror_snapshot_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/mirror_snapshot_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' 
cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/trash_purge_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471148.fayrer/trash_purge_schedule"} : dispatch Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:49 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Loading python module 'crash' Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Module crash has missing NOTIFY_TYPES member Oct 5 05:55:49 localhost ceph-mgr[301363]: mgr[py] Loading python module 'dashboard' Oct 5 05:55:49 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:49.963+0000 7f4212f33140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Loading python module 'devicehealth' Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Loading python module 'diskprediction_local' Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:50.510+0000 7f4212f33140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost systemd[1]: tmp-crun.3hhanh.mount: Deactivated successfully. 
Oct 5 05:55:50 localhost podman[311160]: 2025-10-05 09:55:50.595983326 +0000 UTC m=+0.110400416 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Oct 5 05:55:50 localhost ceph-mon[302793]: removing stray HostCache host record np0005471147.localdomain.devices.0 Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: from numpy import show_config as show_numpy_config Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:50.643+0000 7f4212f33140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Loading python module 'influx' Oct 5 05:55:50 localhost podman[311160]: 2025-10-05 09:55:50.689401404 +0000 UTC m=+0.203818494 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, vendor=Red Hat, Inc., release=553, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, build-date=2025-09-24T08:57:55, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Loading python module 'insights' Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:50.700+0000 7f4212f33140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Loading python module 'iostat' Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:55:50 localhost ceph-mgr[301363]: mgr[py] Loading python module 'k8sevents' Oct 5 05:55:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:50.813+0000 7f4212f33140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'localpool' Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'mds_autoscaler' Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python 
module 'mirroring' Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'nfs' Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'orchestrator' Oct 5 05:55:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:51.553+0000 7f4212f33140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost systemd[1]: tmp-crun.ZTRo0B.mount: Deactivated successfully. 
Oct 5 05:55:51 localhost podman[311298]: 2025-10-05 09:55:51.579065178 +0000 UTC m=+0.099991447 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 05:55:51 localhost podman[311298]: 2025-10-05 09:55:51.592258194 +0000 UTC m=+0.113184463 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 05:55:51 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:55:51 localhost ceph-mon[302793]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Oct 5 05:55:51 localhost ceph-mon[302793]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Oct 5 05:55:51 localhost ceph-mon[302793]: Cluster is now healthy Oct 5 05:55:51 localhost ceph-mon[302793]: [05/Oct/2025:09:55:51] ENGINE Bus STARTING Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: [05/Oct/2025:09:55:51] ENGINE Serving on https://172.18.0.105:7150 Oct 5 05:55:51 localhost ceph-mon[302793]: [05/Oct/2025:09:55:51] ENGINE Client ('172.18.0.105', 40356) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: [05/Oct/2025:09:55:51] ENGINE Serving on http://172.18.0.105:8765 Oct 5 05:55:51 localhost ceph-mon[302793]: [05/Oct/2025:09:55:51] ENGINE Bus STARTED Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Module orchestrator has missing 
NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'osd_perf_query' Oct 5 05:55:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:51.697+0000 7f4212f33140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'osd_support' Oct 5 05:55:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:51.759+0000 7f4212f33140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'pg_autoscaler' Oct 5 05:55:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:51.814+0000 7f4212f33140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:55:51 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:55:51 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'progress' Oct 5 05:55:51 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:51.879+0000 7f4212f33140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 5 05:55:51 localhost ceph-mgr[301363]: mgr[py] Loading python module 'prometheus'
Oct 5 05:55:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:51.942+0000 7f4212f33140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:52.235+0000 7f4212f33140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Loading python module 'rbd_support'
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Loading python module 'restful'
Oct 5 05:55:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:52.315+0000 7f4212f33140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Loading python module 'rgw'
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-mgr[301363]: mgr[py] Loading python module 'rook'
Oct 5 05:55:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:52.635+0000 7f4212f33140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:55:52 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:55:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'selftest'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.054+0000 7f4212f33140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'snap_schedule'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.112+0000 7f4212f33140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'stats'
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'status'
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'telegraf'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.299+0000 7f4212f33140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'telemetry'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.356+0000 7f4212f33140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'test_orchestrator'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.483+0000 7f4212f33140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'volumes'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.625+0000 7f4212f33140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:55:53 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:55:53 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:55:53 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:55:53 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:55:53 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:55:53 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:53 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:53 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:53 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Loading python module 'zabbix'
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.807+0000 7f4212f33140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:53 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:53 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:53 localhost ceph-mgr[301363]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:55:53.864+0000 7f4212f33140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 5 05:55:53 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x562dbe53b600 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0
Oct 5 05:55:53 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.105:6800/1913231220
Oct 5 05:55:53 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:54 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471152.kbhlus", "id": "np0005471152.kbhlus"} v 0)
Oct 5 05:55:54 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mgr metadata", "who": "np0005471152.kbhlus", "id": "np0005471152.kbhlus"} : dispatch
Oct 5 05:55:54 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:54 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:55:54 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:55 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:55 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:55:55 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:55:55 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:55:55 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:55 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:56 localhost podman[248157]: time="2025-10-05T09:55:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:55:56 localhost podman[248157]: @ - - [05/Oct/2025:09:55:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1"
Oct 5 05:55:56 localhost podman[248157]: @ - - [05/Oct/2025:09:55:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19290 "" "Go-http-client/1.1"
Oct 5 05:55:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:55:56 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Oct 5 05:55:56 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:55:56 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:55:56 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:55:56 localhost podman[312081]: 2025-10-05 09:55:56.267955158 +0000 UTC m=+0.086670687 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 5 05:55:56 localhost podman[312081]: 2025-10-05 09:55:56.30329423 +0000 UTC m=+0.122009759 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:55:56 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:55:56 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:55:56 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:56 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:56 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:55:56 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:57 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:57 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:58 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:58 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:58 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:58 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:58 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:55:58 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:55:58 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:58 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:59 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:59 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:59 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:59 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:55:59 localhost ceph-mon[302793]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:55:59 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:55:59 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:55:59 localhost podman[312151]:
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:59 localhost podman[312151]: 2025-10-05 09:55:59.911047646 +0000 UTC m=+0.080607653 container create fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:55:59 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:55:59 localhost systemd[1]: Started libpod-conmon-fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b.scope.
Oct 5 05:55:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:55:59 localhost systemd[1]: Started libcrun container.
Oct 5 05:55:59 localhost podman[312151]: 2025-10-05 09:55:59.878364335 +0000 UTC m=+0.047924392 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:55:59 localhost podman[312151]: 2025-10-05 09:55:59.996293474 +0000 UTC m=+0.165853481 container init fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:56:00 localhost podman[312151]: 2025-10-05 09:56:00.01395135 +0000 UTC m=+0.183511357 container start fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Oct 5 05:56:00 localhost podman[312151]: 2025-10-05 09:56:00.014231878 +0000 UTC m=+0.183791935 container attach fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, name=rhceph, release=553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True)
Oct 5 05:56:00 localhost awesome_nightingale[312166]: 167 167
Oct 5 05:56:00 localhost systemd[1]: libpod-fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b.scope: Deactivated successfully.
Oct 5 05:56:00 localhost podman[312151]: 2025-10-05 09:56:00.023350733 +0000 UTC m=+0.192910740 container died fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, GIT_CLEAN=True, release=553, GIT_BRANCH=main, name=rhceph, RELEASE=main, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:56:00 localhost podman[312167]: 2025-10-05 09:56:00.065410937 +0000 UTC m=+0.094273022 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:56:00 localhost podman[312167]: 2025-10-05 09:56:00.079113776 +0000 UTC m=+0.107975871 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:56:00 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 05:56:00 localhost podman[312188]: 2025-10-05 09:56:00.140544983 +0000 UTC m=+0.091324953 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0)
Oct 5 05:56:00 localhost podman[312182]: 2025-10-05 09:56:00.179465272 +0000 UTC m=+0.147704913 container remove fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, ceph=True, GIT_BRANCH=main, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:56:00 localhost systemd[1]: libpod-conmon-fe9852f613578b04df50e8ed2b9ca2ce2f27520a2aee35a0b694195daf8c1d0b.scope: Deactivated successfully.
Oct 5 05:56:00 localhost podman[312188]: 2025-10-05 09:56:00.224240889 +0000 UTC m=+0.175020839 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 05:56:00 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:56:00 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:00 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:00 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:00 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:00 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:56:00 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:00 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:56:00 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:56:00 localhost systemd[1]: tmp-crun.mQ7wDm.mount: Deactivated successfully.
Oct 5 05:56:00 localhost systemd[1]: var-lib-containers-storage-overlay-724976abdbd5d18ebd3512bdccf5aac9b94ff440df4254d8e56bbc0db50ffc1c-merged.mount: Deactivated successfully.
Oct 5 05:56:01 localhost podman[312287]:
Oct 5 05:56:01 localhost podman[312287]: 2025-10-05 09:56:01.080201594 +0000 UTC m=+0.076489274 container create a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_swirles, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Oct 5 05:56:01 localhost systemd[1]: Started libpod-conmon-a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8.scope.
Oct 5 05:56:01 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:01 localhost podman[312287]: 2025-10-05 09:56:01.049102986 +0000 UTC m=+0.045390706 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:56:01 localhost podman[312287]: 2025-10-05 09:56:01.157441106 +0000 UTC m=+0.153728806 container init a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_swirles, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:56:01 localhost podman[312287]: 2025-10-05 09:56:01.173624742 +0000 UTC m=+0.169912412 container start a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_swirles, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 5 05:56:01 localhost podman[312287]: 2025-10-05 09:56:01.173955241 +0000 UTC m=+0.170242891 container attach a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_swirles, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, name=rhceph, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux )
Oct 5 05:56:01 localhost brave_swirles[312302]: 167 167
Oct 5 05:56:01 localhost systemd[1]: libpod-a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8.scope: Deactivated successfully.
Oct 5 05:56:01 localhost podman[312287]: 2025-10-05 09:56:01.181114124 +0000 UTC m=+0.177401814 container died a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_swirles, release=553, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux )
Oct 5 05:56:01 localhost podman[312307]: 2025-10-05 09:56:01.281811799 +0000 UTC m=+0.089563726 container remove a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_swirles, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, build-date=2025-09-24T08:57:55, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Oct 5 05:56:01 localhost systemd[1]: libpod-conmon-a66fe458da2e04500ce1c85d3cb1295d09421cc72a7cf24535bb9d6ac5de24f8.scope: Deactivated successfully.
Oct 5 05:56:01 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:01 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:01 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:01 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:01 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:56:01 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:56:01 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:56:01 localhost systemd[1]: tmp-crun.GBtEUp.mount: Deactivated successfully.
Oct 5 05:56:01 localhost systemd[1]: var-lib-containers-storage-overlay-ae20bcb5b324ac2ece38cf81a94d6c47ab83c6efb48f2a42d82a6216ddfefbaf-merged.mount: Deactivated successfully.
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:56:01 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:56:02 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:02 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:02 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:02 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:02 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:02 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:02 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:56:02 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Oct 5 05:56:03 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:03 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:56:04 localhost nova_compute[297130]: 2025-10-05 09:56:04.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:56:04 localhost ceph-mon[302793]: Saving service mon spec with placement label:mon
Oct 5 05:56:04 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:04 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:04 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:04 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:04 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:04 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:56:04 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 5 05:56:04 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 5 05:56:04 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Oct 5 05:56:04 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:04 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:05 localhost nova_compute[297130]: 2025-10-05 09:56:05.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:05 localhost nova_compute[297130]: 2025-10-05 09:56:05.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:05 localhost nova_compute[297130]: 2025-10-05 09:56:05.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:56:05 localhost nova_compute[297130]: 2025-10-05 09:56:05.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:56:05 localhost nova_compute[297130]: 2025-10-05 09:56:05.342 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:56:05 localhost ceph-mon[302793]: Reconfiguring mon.np0005471148 (monmap changed)... 
Oct 5 05:56:05 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain
Oct 5 05:56:05 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:05 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:05 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0)
Oct 5 05:56:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 5 05:56:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 5 05:56:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:56:05 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.319 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.320 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.320 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.320 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.321 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 05:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 05:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:56:06 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:56:06 localhost podman[312452]:
Oct 5 05:56:06 localhost podman[312421]: 2025-10-05 09:56:06.571846623 +0000 UTC m=+0.115933626 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 5 05:56:06 localhost podman[312452]: 2025-10-05 09:56:06.577062894 +0000 UTC m=+0.091582531 container create a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_herschel, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, release=553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, ceph=True, version=7, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:56:06 localhost ceph-mon[302793]: Reconfiguring mon.np0005471150 (monmap changed)...
Oct 5 05:56:06 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:56:06 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:06 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer'
Oct 5 05:56:06 localhost ceph-mon[302793]: from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.611373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166611435, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2802, "num_deletes": 256, "total_data_size": 8503838, "memory_usage": 8745984, "flush_reason": "Manual Compaction"}
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct 5 05:56:06 localhost podman[312419]: 2025-10-05 09:56:06.634339377 +0000 UTC m=+0.178519353 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0)
Oct 5 05:56:06 localhost podman[312452]: 2025-10-05 09:56:06.548696829 +0000 UTC m=+0.063216486 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166652346, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 5067169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16932, "largest_seqno": 19729, "table_properties": {"data_size": 5055540, "index_size": 7238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 29742, "raw_average_key_size": 22, "raw_value_size": 5030205, "raw_average_value_size": 3822, "num_data_blocks": 314, "num_entries": 1316, "num_filter_entries": 1316, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658102, "oldest_key_time": 1759658102, "file_creation_time": 1759658166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 41037 microseconds, and 7512 cpu microseconds.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.652408) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 5067169 bytes OK
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.652437) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.654410) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.654432) EVENT_LOG_v1 {"time_micros": 1759658166654426, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.654456) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 8490280, prev total WAL file size 8490280, number of live WAL files 2.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.656205) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(4948KB)], [24(13MB)]
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166656251, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19439270, "oldest_snapshot_seqno": -1}
Oct 5 05:56:06 localhost podman[312419]: 2025-10-05 09:56:06.678110547 +0000 UTC m=+0.222290523 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 05:56:06 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11269 keys, 17537226 bytes, temperature: kUnknown
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166777401, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 17537226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17469354, "index_size": 38587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299691, "raw_average_key_size": 26, "raw_value_size": 17273873, "raw_average_value_size": 1532, "num_data_blocks": 1490, "num_entries": 11269, "num_filter_entries": 11269, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759658166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.777763) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 17537226 bytes
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.779512) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.3 rd, 144.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.8, 13.7 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(7.3) write-amplify(3.5) OK, records in: 11818, records dropped: 549 output_compression: NoCompression
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.779542) EVENT_LOG_v1 {"time_micros": 1759658166779528, "job": 12, "event": "compaction_finished", "compaction_time_micros": 121277, "compaction_time_cpu_micros": 36605, "output_level": 6, "num_output_files": 1, "total_output_size": 17537226, "num_input_records": 11818, "num_output_records": 11269, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166780326, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 5 05:56:06 localhost systemd[1]: Started libpod-conmon-a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8.scope.
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658166784558, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 5 05:56:06 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.656088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.784679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.784713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.784718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.784723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:06 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1966637431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 05:56:06 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:06.784729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:56:06 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:06 localhost podman[312421]: 2025-10-05 09:56:06.801142564 +0000 UTC m=+0.345229567 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 5 05:56:06 localhost nova_compute[297130]: 2025-10-05 09:56:06.803 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 5 05:56:06 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:56:06 localhost podman[312452]: 2025-10-05 09:56:06.817082104 +0000 UTC m=+0.331601741 container init a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_herschel, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux )
Oct 5 05:56:06 localhost podman[312452]: 2025-10-05 09:56:06.83475812 +0000 UTC m=+0.349277767 container start a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_herschel, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, release=553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, distribution-scope=public, GIT_CLEAN=True)
Oct 5 05:56:06 localhost podman[312452]: 2025-10-05 09:56:06.835073669 +0000 UTC m=+0.349593326 container attach a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_herschel, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:56:06 localhost systemd[1]: libpod-a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8.scope: Deactivated successfully.
Oct 5 05:56:06 localhost distracted_herschel[312500]: 167 167 Oct 5 05:56:06 localhost podman[312452]: 2025-10-05 09:56:06.839999642 +0000 UTC m=+0.354519349 container died a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_herschel, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:56:06 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:06 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:06 localhost podman[312506]: 2025-10-05 09:56:06.957028326 +0000 UTC m=+0.106002298 container remove a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_herschel, name=rhceph, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, maintainer=Guillaume Abrioux , release=553, GIT_CLEAN=True, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:56:06 localhost systemd[1]: libpod-conmon-a092ea7ef8b7d31518e884c4dbb841196a50f2654f98e8a9deed4a0a9e04a5e8.scope: Deactivated successfully. Oct 5 05:56:07 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:56:07 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.074 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.077 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11900MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.078 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.078 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.154 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.155 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.182 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:56:07 localhost systemd[1]: var-lib-containers-storage-overlay-3f68a95d581196d27290296cc0545522efc44fb0c5ca036bc37b0fe11ca6a92b-merged.mount: Deactivated successfully. Oct 5 05:56:07 localhost ceph-mon[302793]: Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:56:07 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:56:07 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:07 localhost ceph-mon[302793]: from='mgr.24103 ' entity='mgr.np0005471148.fayrer' Oct 5 05:56:07 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:56:07 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1125158804' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.644 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.653 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.676 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.679 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:56:07 localhost nova_compute[297130]: 2025-10-05 09:56:07.680 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:56:07 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:07 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:07 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:08 localhost nova_compute[297130]: 2025-10-05 09:56:08.680 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:08 localhost nova_compute[297130]: 2025-10-05 09:56:08.681 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:08 localhost nova_compute[297130]: 2025-10-05 09:56:08.681 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:08 localhost nova_compute[297130]: 2025-10-05 09:56:08.682 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:08 localhost nova_compute[297130]: 2025-10-05 09:56:08.682 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:56:08 localhost nova_compute[297130]: 2025-10-05 09:56:08.683 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:56:08 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:08 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:09 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:09 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:09 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Oct 5 05:56:10 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:10 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:11 localhost ceph-mon[302793]: 
mon.np0005471152@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:11 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x562dbe53af20 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Oct 5 05:56:11 localhost ceph-mon[302793]: mon.np0005471152@1(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471148"} v 0) Oct 5 05:56:11 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471148"} : dispatch Oct 5 05:56:11 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:56:11 localhost ceph-mon[302793]: paxos.1).electionLogic(44) init, last seen epoch 44 Oct 5 05:56:11 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:11 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:11 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:56:11 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:56:11 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:11 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:11 localhost ceph-mon[302793]: 
mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:56:11 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:56:11 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:11 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:12 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:12 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:13 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:13 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:14 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:14 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:15 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:15 localhost 
ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:16 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:56:16 localhost ceph-mon[302793]: paxos.1).electionLogic(46) init, last seen epoch 46 Oct 5 05:56:16 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 handle_timecheck drop unexpected msg Oct 5 05:56:16 localhost ceph-mon[302793]: mon.np0005471152@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:16 localhost openstack_network_exporter[250246]: ERROR 09:56:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:56:16 localhost openstack_network_exporter[250246]: ERROR 09:56:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:56:16 localhost openstack_network_exporter[250246]: ERROR 09:56:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:56:16 localhost openstack_network_exporter[250246]: ERROR 09:56:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:56:16 localhost openstack_network_exporter[250246]: Oct 5 05:56:16 localhost openstack_network_exporter[250246]: ERROR 09:56:16 
appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:56:16 localhost openstack_network_exporter[250246]: Oct 5 05:56:16 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:16 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.24103 172.18.0.105:0/4141398109' entity='mgr.np0005471148.fayrer' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election Oct 5 05:56:17 localhost ceph-mon[302793]: Health check failed: 1/4 mons down, quorum np0005471148,np0005471152,np0005471150 (MON_DOWN) Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election Oct 5 05:56:17 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471148 calling monitor election Oct 5 05:56:17 localhost ceph-mon[302793]: mon.np0005471148 is new leader, mons np0005471148,np0005471152,np0005471150,np0005471151 in quorum (ranks 0,1,2,3) Oct 5 05:56:17 localhost ceph-mon[302793]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005471148,np0005471152,np0005471150) Oct 5 05:56:17 localhost ceph-mon[302793]: Cluster is now healthy Oct 5 05:56:17 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:56:18 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e85 e85: 6 total, 6 up, 6 in Oct 5 05:56:18 localhost systemd[1]: session-70.scope: Deactivated successfully. Oct 5 05:56:18 localhost systemd[1]: session-70.scope: Consumed 8.431s CPU time. Oct 5 05:56:18 localhost systemd-logind[760]: Session 70 logged out. 
Waiting for processes to exit. Oct 5 05:56:18 localhost systemd-logind[760]: Removed session 70. Oct 5 05:56:18 localhost ceph-mon[302793]: from='client.? 172.18.0.200:0/4001180372' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:56:18 localhost ceph-mon[302793]: Activating manager daemon np0005471150.zwqxye Oct 5 05:56:18 localhost ceph-mon[302793]: from='client.? 172.18.0.200:0/4001180372' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:56:18 localhost ceph-mon[302793]: Manager daemon np0005471150.zwqxye is now available Oct 5 05:56:18 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/mirror_snapshot_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/mirror_snapshot_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/trash_purge_schedule"} : dispatch Oct 5 05:56:18 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471150.zwqxye/trash_purge_schedule"} : dispatch Oct 5 05:56:18 localhost sshd[312545]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:56:18 localhost systemd-logind[760]: New session 71 of user ceph-admin. Oct 5 05:56:18 localhost systemd[1]: Started Session 71 of User ceph-admin. Oct 5 05:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:56:19 localhost podman[312585]: 2025-10-05 09:56:19.190911142 +0000 UTC m=+0.099824482 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:56:19 localhost podman[312585]: 2025-10-05 09:56:19.199818552 +0000 UTC m=+0.108731952 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 05:56:19 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 05:56:19 localhost podman[312586]: 2025-10-05 09:56:19.28990716 +0000 UTC m=+0.198382039 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:56:19 localhost podman[312586]: 2025-10-05 09:56:19.300567268 +0000 UTC m=+0.209042157 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:56:19 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:56:19 localhost podman[312698]: 2025-10-05 09:56:19.943743746 +0000 UTC m=+0.102541975 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, version=7, GIT_CLEAN=True, release=553, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Oct 5 05:56:20 localhost podman[312698]: 2025-10-05 09:56:20.043707701 +0000 UTC m=+0.202505980 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.33.12) Oct 5 05:56:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:56:20.392 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:56:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:56:20.394 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:56:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:56:20.394 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:56:21 
localhost ceph-mon[302793]: [05/Oct/2025:09:56:19] ENGINE Bus STARTING Oct 5 05:56:21 localhost ceph-mon[302793]: [05/Oct/2025:09:56:19] ENGINE Serving on http://172.18.0.106:8765 Oct 5 05:56:21 localhost ceph-mon[302793]: [05/Oct/2025:09:56:20] ENGINE Serving on https://172.18.0.106:7150 Oct 5 05:56:21 localhost ceph-mon[302793]: [05/Oct/2025:09:56:20] ENGINE Bus STARTED Oct 5 05:56:21 localhost ceph-mon[302793]: [05/Oct/2025:09:56:20] ENGINE Client ('172.18.0.106', 51652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:21 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:56:21 localhost podman[312907]: 2025-10-05 09:56:21.911679926 +0000 UTC m=+0.095235288 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible) Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.918141) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658181918176, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 679, "num_deletes": 256, "total_data_size": 2229414, "memory_usage": 2302528, "flush_reason": "Manual Compaction"} Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Oct 5 05:56:21 localhost podman[312907]: 2025-10-05 09:56:21.924849792 +0000 UTC m=+0.108405164 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, release=1755695350) Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658181930620, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1461678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19734, "largest_seqno": 20408, "table_properties": {"data_size": 1458023, "index_size": 1382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9457, "raw_average_key_size": 20, "raw_value_size": 1449919, "raw_average_value_size": 3078, "num_data_blocks": 54, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658166, "oldest_key_time": 1759658166, "file_creation_time": 1759658181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12528 microseconds, and 3709 cpu microseconds. 
Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.930668) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1461678 bytes OK Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.930690) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933050) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933071) EVENT_LOG_v1 {"time_micros": 1759658181933063, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2225389, prev total WAL file size 2225389, number of live WAL files 2. Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933823) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323632' seq:72057594037927935, type:22 .. 
'6B760031353138' seq:0, type:0; will stop at (end) Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1427KB)], [27(16MB)] Oct 5 05:56:21 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658181933855, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18998904, "oldest_snapshot_seqno": -1} Oct 5 05:56:21 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11190 keys, 17845064 bytes, temperature: kUnknown Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658182062272, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17845064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17778685, "index_size": 37286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28037, "raw_key_size": 300092, "raw_average_key_size": 26, "raw_value_size": 17585316, "raw_average_value_size": 1571, "num_data_blocks": 1416, "num_entries": 11190, "num_filter_entries": 11190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": 
"nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759658181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.062746) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17845064 bytes Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.064587) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 138.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 16.7 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(25.2) write-amplify(12.2) OK, records in: 11740, records dropped: 550 output_compression: NoCompression Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.064617) EVENT_LOG_v1 {"time_micros": 1759658182064603, "job": 14, "event": "compaction_finished", "compaction_time_micros": 128550, "compaction_time_cpu_micros": 48089, "output_level": 6, "num_output_files": 1, "total_output_size": 17845064, "num_input_records": 11740, "num_output_records": 11190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} 
Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658182065000, "job": 14, "event": "table_file_deletion", "file_number": 29} Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658182067475, "job": 14, "event": "table_file_deletion", "file_number": 27} Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:21.933719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:22 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:56:22.067552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost 
ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd/host:np0005471148", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' 
entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: Adjusting osd_memory_target on 
np0005471150.localdomain to 836.6M Oct 5 05:56:22 localhost ceph-mon[302793]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:56:22 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:56:22 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.conf Oct 5 05:56:22 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:56:22 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:56:22 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:56:24 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:24 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:24 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:24 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:24 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: Updating np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:56:25 
localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:25 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:26 localhost podman[248157]: time="2025-10-05T09:56:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:56:26 localhost podman[248157]: @ - - [05/Oct/2025:09:56:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:56:26 localhost podman[248157]: @ - - [05/Oct/2025:09:56:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19307 "" "Go-http-client/1.1" Oct 5 05:56:26 localhost ceph-mon[302793]: Reconfiguring mon.np0005471148 (monmap changed)... 
Oct 5 05:56:26 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:26 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471148 on np0005471148.localdomain Oct 5 05:56:26 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:56:26 localhost podman[313621]: 2025-10-05 09:56:26.896460492 +0000 UTC m=+0.068023224 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 05:56:26 localhost podman[313621]: 2025-10-05 09:56:26.926448251 +0000 UTC m=+0.098010923 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Oct 5 05:56:26 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471148.fayrer (monmap changed)... Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471148.fayrer", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471148.fayrer on np0005471148.localdomain Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile 
crash"]} : dispatch Oct 5 05:56:27 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471148.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:28 localhost ceph-mon[302793]: Reconfiguring crash.np0005471148 (monmap changed)... Oct 5 05:56:28 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471148 on np0005471148.localdomain Oct 5 05:56:28 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:28 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:28 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:28 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:29 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:56:29 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:56:29 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:29 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:29 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:29 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:56:30 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:56:30 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:56:30 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:30 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:30 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:56:30 localhost systemd[1]: tmp-crun.nMUxcX.mount: Deactivated successfully. Oct 5 05:56:30 localhost podman[313643]: 2025-10-05 09:56:30.919858673 +0000 UTC m=+0.087711535 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:56:30 localhost podman[313643]: 2025-10-05 09:56:30.953999404 +0000 UTC m=+0.121852276 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:56:30 localhost podman[313642]: 2025-10-05 09:56:30.963731126 +0000 UTC m=+0.134606650 container health_status 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 05:56:30 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:56:30 localhost podman[313642]: 2025-10-05 09:56:30.978103073 +0000 UTC m=+0.148978637 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:56:30 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:56:31 localhost ceph-mon[302793]: mon.np0005471152@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:31 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)... Oct 5 05:56:31 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:56:31 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:31 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:31 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:31 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:32 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:56:32 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:56:32 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:32 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:32 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:32 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:33 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:56:33 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:56:33 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:33 localhost ceph-mon[302793]: from='mgr.26993 ' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:33 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:34 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x562dbe53b1e0 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Oct 5 05:56:34 localhost ceph-mon[302793]: mon.np0005471152@1(peon) e13 my rank is now 0 (was 1) Oct 5 05:56:34 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Oct 5 05:56:34 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Oct 5 05:56:34 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x562dbe53af20 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 Oct 5 
05:56:34 localhost ceph-mon[302793]: mon.np0005471152@0(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:56:34 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:56:34 localhost ceph-mon[302793]: mon.np0005471152@0(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:56:34 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:56:34 localhost ceph-mon[302793]: mon.np0005471152@0(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:56:34 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:56:34 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:56:34 localhost ceph-mon[302793]: paxos.0).electionLogic(48) init, last seen epoch 48 Oct 5 05:56:34 localhost ceph-mon[302793]: mon.np0005471152@0(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:34 localhost ceph-mon[302793]: mon.np0005471152@0(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:56:36 localhost podman[313684]: 2025-10-05 09:56:36.914134143 +0000 UTC m=+0.081587491 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3) Oct 5 05:56:36 localhost podman[313684]: 2025-10-05 09:56:36.945804066 +0000 UTC m=+0.113257434 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 05:56:36 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:56:37 localhost podman[313685]: 2025-10-05 09:56:37.018756894 +0000 UTC m=+0.181938166 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:56:37 localhost podman[313685]: 2025-10-05 09:56:37.124812892 +0000 UTC m=+0.287994114 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller) Oct 5 05:56:37 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:56:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 05:56:39 localhost ceph-mds[300011]: mds.beacon.mds.np0005471152.pozuqw missed beacon ack from the monitors Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 is new leader, mons np0005471152,np0005471150 in quorum (ranks 0,1) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : monmap epoch 13 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : last_changed 2025-10-05T09:56:34.541130+0000 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : created 2025-10-05T07:42:01.637504+0000 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : election_strategy: 1 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005471152 Oct 5 
05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005471150 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005471151 Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005471152.pozuqw=up:active} 2 up:standby Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : mgrmap e32: np0005471150.zwqxye(active, since 21s), standbys: np0005471151.jecxod, np0005471152.kbhlus, np0005471148.fayrer Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005471152,np0005471150 (MON_DOWN) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum np0005471152,np0005471150 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005471152,np0005471150 Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(cluster) log [WRN] : mon.np0005471151 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' 
entity='mgr.np0005471150.zwqxye' Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:39 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:39 localhost ceph-mon[302793]: Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:56:39 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:56:39 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)... Oct 5 05:56:39 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:39 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:56:39 localhost ceph-mon[302793]: Remove daemons mon.np0005471148 Oct 5 05:56:39 localhost ceph-mon[302793]: Safe to remove mon.np0005471148: new quorum should be ['np0005471152', 'np0005471150', 'np0005471151'] (from ['np0005471152', 'np0005471150', 'np0005471151']) Oct 5 05:56:39 localhost ceph-mon[302793]: Removing monitor np0005471148 from monmap... 
Oct 5 05:56:39 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon rm", "name": "np0005471148"} : dispatch Oct 5 05:56:39 localhost ceph-mon[302793]: Removing daemon mon.np0005471148 from np0005471148.localdomain -- ports [] Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471152 is new leader, mons np0005471152,np0005471150 in quorum (ranks 0,1) Oct 5 05:56:39 localhost ceph-mon[302793]: Health check failed: 1/3 mons down, quorum np0005471152,np0005471150 (MON_DOWN) Oct 5 05:56:39 localhost ceph-mon[302793]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005471152,np0005471150 Oct 5 05:56:39 localhost ceph-mon[302793]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005471152,np0005471150 Oct 5 05:56:39 localhost ceph-mon[302793]: mon.np0005471151 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Oct 5 05:56:39 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:39 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:39 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:56:40 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:40 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:40 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:56:40 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:40 localhost ceph-mon[302793]: Reconfiguring osd.2 (monmap changed)... Oct 5 05:56:40 localhost ceph-mon[302793]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:56:40 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:40 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:40 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:56:40 localhost ceph-mon[302793]: paxos.0).electionLogic(51) init, last seen epoch 51, mid-election, bumping Oct 5 05:56:40 localhost ceph-mon[302793]: mon.np0005471152@0(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : 
mon.np0005471152 is new leader, mons np0005471152,np0005471150,np0005471151 in quorum (ranks 0,1,2) Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : monmap epoch 13 Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : last_changed 2025-10-05T09:56:34.541130+0000 Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : created 2025-10-05T07:42:01.637504+0000 Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : election_strategy: 1 Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005471152 Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005471150 Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005471151 Oct 5 05:56:40 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005471152.pozuqw=up:active} 2 up:standby Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [DBG] : mgrmap e32: np0005471150.zwqxye(active, since 22s), standbys: np0005471151.jecxod, np0005471152.kbhlus, np0005471148.fayrer Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005471152,np0005471150) Oct 5 05:56:40 localhost ceph-mon[302793]: 
log_channel(cluster) log [INF] : Cluster is now healthy Oct 5 05:56:40 localhost ceph-mon[302793]: log_channel(cluster) log [INF] : overall HEALTH_OK Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 5 05:56:41 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:41 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:41 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 5 05:56:41 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:41 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471151 calling monitor election Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152 calling monitor election Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471150 calling monitor election Oct 5 05:56:41 localhost ceph-mon[302793]: mon.np0005471152 is new leader, mons np0005471152,np0005471150,np0005471151 in quorum (ranks 0,1,2) Oct 5 05:56:41 localhost ceph-mon[302793]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005471152,np0005471150) Oct 5 05:56:41 localhost ceph-mon[302793]: Cluster is now healthy Oct 5 05:56:41 localhost ceph-mon[302793]: overall HEALTH_OK Oct 5 05:56:41 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:41 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:56:42 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:42 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:42 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 5 05:56:42 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:42 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 5 05:56:42 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch Oct 5 05:56:42 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:42 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:42 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 5 05:56:42 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[302793]: Removed label mon from host np0005471148.localdomain Oct 5 05:56:42 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)... 
Oct 5 05:56:42 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain Oct 5 05:56:42 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:42 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:56:42 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:43 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:43 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 5 05:56:43 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:43 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 5 05:56:43 localhost ceph-mon[302793]: 
log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 5 05:56:43 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:43 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:43 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)... Oct 5 05:56:43 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain Oct 5 05:56:43 localhost ceph-mon[302793]: Removed label mgr from host np0005471148.localdomain Oct 5 05:56:43 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:43 localhost ceph-mon[302793]: Reconfiguring mon.np0005471151 (monmap changed)... 
Oct 5 05:56:43 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:56:43 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:56:43 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 5 05:56:43 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:44 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:44 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:56:44 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:44 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:44 localhost 
ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:44 localhost podman[313781]: Oct 5 05:56:44 localhost podman[313781]: 2025-10-05 09:56:44.831340738 +0000 UTC m=+0.079333329 container create ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_ride, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7) Oct 5 05:56:44 localhost systemd[1]: Started libpod-conmon-ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b.scope. Oct 5 05:56:44 localhost systemd[1]: Started libcrun container. 
Oct 5 05:56:44 localhost podman[313781]: 2025-10-05 09:56:44.800020684 +0000 UTC m=+0.048013265 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:44 localhost podman[313781]: 2025-10-05 09:56:44.906853464 +0000 UTC m=+0.154846045 container init ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_ride, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public) Oct 5 05:56:44 localhost tender_ride[313796]: 167 167 Oct 5 05:56:44 localhost systemd[1]: libpod-ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b.scope: Deactivated successfully. 
Oct 5 05:56:44 localhost podman[313781]: 2025-10-05 09:56:44.921439978 +0000 UTC m=+0.169432569 container start ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_ride, com.redhat.component=rhceph-container, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main) Oct 5 05:56:44 localhost podman[313781]: 2025-10-05 09:56:44.921842238 +0000 UTC m=+0.169834839 container attach ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_ride, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux , distribution-scope=public) Oct 5 05:56:44 localhost podman[313781]: 2025-10-05 09:56:44.925849676 +0000 UTC m=+0.173842277 container died ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_ride, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:56:44 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[302793]: 
Removed label _admin from host np0005471148.localdomain Oct 5 05:56:44 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:44 localhost ceph-mon[302793]: Reconfiguring crash.np0005471152 (monmap changed)... Oct 5 05:56:44 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:56:44 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain Oct 5 05:56:45 localhost podman[313801]: 2025-10-05 09:56:45.019974954 +0000 UTC m=+0.088224150 container remove ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_ride, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Oct 5 05:56:45 localhost systemd[1]: libpod-conmon-ee70e3c605ea23ca9fdf6e63478454d150f63d169772f2affe2df0b9f2def12b.scope: Deactivated successfully. Oct 5 05:56:45 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:56:45 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:45 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:56:45 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:45 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Oct 5 05:56:45 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Oct 5 05:56:45 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:45 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:45 localhost podman[313871]: Oct 5 05:56:45 localhost podman[313871]: 2025-10-05 09:56:45.747223768 +0000 UTC m=+0.093244604 container create 7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_buck, release=553, io.openshift.expose-services=, RELEASE=main, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True)
Oct 5 05:56:45 localhost systemd[1]: Started libpod-conmon-7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5.scope.
Oct 5 05:56:45 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:45 localhost podman[313871]: 2025-10-05 09:56:45.819241 +0000 UTC m=+0.165261826 container init 7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_buck, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, version=7)
Oct 5 05:56:45 localhost podman[313871]: 2025-10-05 09:56:45.720005685 +0000 UTC m=+0.066026491 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:56:45 localhost podman[313871]: 2025-10-05 09:56:45.828700134 +0000 UTC m=+0.174720970 container start 7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_buck, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 5 05:56:45 localhost podman[313871]: 2025-10-05 09:56:45.828952831 +0000 UTC m=+0.174973687 container attach 7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_buck, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64)
Oct 5 05:56:45 localhost cool_buck[313886]: 167 167
Oct 5 05:56:45 localhost systemd[1]: libpod-7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5.scope: Deactivated successfully.
Oct 5 05:56:45 localhost podman[313871]: 2025-10-05 09:56:45.831939313 +0000 UTC m=+0.177960179 container died 7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_buck, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_BRANCH=main, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Oct 5 05:56:45 localhost systemd[1]: var-lib-containers-storage-overlay-a616bee54b27671a1d3869a7114668581176982781e86f7b41d3af5e1c07d326-merged.mount: Deactivated successfully.
Oct 5 05:56:45 localhost systemd[1]: var-lib-containers-storage-overlay-ec4e7de1d4890d86c517726644f7fbaf823eec1f12d17f908d3cda8f7ceac9b8-merged.mount: Deactivated successfully.
Oct 5 05:56:45 localhost podman[313891]: 2025-10-05 09:56:45.929497432 +0000 UTC m=+0.087663084 container remove 7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_buck, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, version=7, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:56:45 localhost systemd[1]: libpod-conmon-7d94bf0216227665fc118712ea62d0f6998c9e546bc7e45b0a643817fc54aea5.scope: Deactivated successfully.
Oct 5 05:56:46 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:46 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:46 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:46 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:46 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Oct 5 05:56:46 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:56:46 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:46 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:46 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:46 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:46 localhost ceph-mon[302793]: Reconfiguring osd.0 (monmap changed)...
Oct 5 05:56:46 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:56:46 localhost ceph-mon[302793]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:56:46 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:46 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:56:46 localhost podman[313967]:
Oct 5 05:56:46 localhost podman[313967]: 2025-10-05 09:56:46.712589152 +0000 UTC m=+0.074584712 container create ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hofstadter, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:56:46 localhost openstack_network_exporter[250246]: ERROR 09:56:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:56:46 localhost openstack_network_exporter[250246]: ERROR 09:56:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:56:46 localhost openstack_network_exporter[250246]: ERROR 09:56:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:56:46 localhost openstack_network_exporter[250246]: ERROR 09:56:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:56:46 localhost openstack_network_exporter[250246]:
Oct 5 05:56:46 localhost openstack_network_exporter[250246]: ERROR 09:56:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:56:46 localhost openstack_network_exporter[250246]:
Oct 5 05:56:46 localhost systemd[1]: Started libpod-conmon-ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd.scope.
Oct 5 05:56:46 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:46 localhost podman[313967]: 2025-10-05 09:56:46.778050797 +0000 UTC m=+0.140046357 container init ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hofstadter, CEPH_POINT_RELEASE=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, version=7, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True)
Oct 5 05:56:46 localhost podman[313967]: 2025-10-05 09:56:46.681951986 +0000 UTC m=+0.043947566 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:56:46 localhost podman[313967]: 2025-10-05 09:56:46.786942546 +0000 UTC m=+0.148938106 container start ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hofstadter, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True)
Oct 5 05:56:46 localhost podman[313967]: 2025-10-05 09:56:46.787167563 +0000 UTC m=+0.149163163 container attach ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hofstadter, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 5 05:56:46 localhost eager_hofstadter[313982]: 167 167
Oct 5 05:56:46 localhost systemd[1]: libpod-ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd.scope: Deactivated successfully.
Oct 5 05:56:46 localhost podman[313967]: 2025-10-05 09:56:46.790928284 +0000 UTC m=+0.152923934 container died ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hofstadter, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, name=rhceph)
Oct 5 05:56:46 localhost systemd[1]: tmp-crun.gk8zza.mount: Deactivated successfully.
Oct 5 05:56:46 localhost systemd[1]: var-lib-containers-storage-overlay-9d243aa21fffd45f56cede85e712bc91d6903acbaa84e2af5c490f0959ca12f2-merged.mount: Deactivated successfully.
Oct 5 05:56:46 localhost podman[313987]: 2025-10-05 09:56:46.887689742 +0000 UTC m=+0.088129826 container remove ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hofstadter, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7)
Oct 5 05:56:46 localhost systemd[1]: libpod-conmon-ec763814b3300eeb4df8e43190d8338fd0b2ff1cfeeb785ae6859d4faba7d4fd.scope: Deactivated successfully.
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:47 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: Reconfiguring osd.3 (monmap changed)...
Oct 5 05:56:47 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:56:47 localhost ceph-mon[302793]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:56:47 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:56:47 localhost podman[314062]:
Oct 5 05:56:47 localhost podman[314062]: 2025-10-05 09:56:47.680181926 +0000 UTC m=+0.079220037 container create 82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wilson, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:56:47 localhost systemd[1]: Started libpod-conmon-82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682.scope.
Oct 5 05:56:47 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:47 localhost podman[314062]: 2025-10-05 09:56:47.737951433 +0000 UTC m=+0.136989544 container init 82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wilson, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Oct 5 05:56:47 localhost podman[314062]: 2025-10-05 09:56:47.746863573 +0000 UTC m=+0.145901674 container start 82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wilson, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55)
Oct 5 05:56:47 localhost podman[314062]: 2025-10-05 09:56:47.647458224 +0000 UTC m=+0.046496355 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:56:47 localhost podman[314062]: 2025-10-05 09:56:47.74712028 +0000 UTC m=+0.146158391 container attach 82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wilson, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=)
Oct 5 05:56:47 localhost interesting_wilson[314077]: 167 167
Oct 5 05:56:47 localhost systemd[1]: libpod-82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682.scope: Deactivated successfully.
Oct 5 05:56:47 localhost podman[314062]: 2025-10-05 09:56:47.749429823 +0000 UTC m=+0.148467954 container died 82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wilson, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:56:47 localhost podman[314082]: 2025-10-05 09:56:47.838065192 +0000 UTC m=+0.080360387 container remove 82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_wilson, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:56:47 localhost systemd[1]: libpod-conmon-82407fff3a86bc2214a623c7d1b7a6fac680deae342dd75b1c65299b81f0b682.scope: Deactivated successfully.
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch
Oct 5 05:56:47 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:47 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:48 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)...
Oct 5 05:56:48 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain
Oct 5 05:56:48 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:48 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:48 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:56:48 localhost podman[314152]:
Oct 5 05:56:48 localhost podman[314152]: 2025-10-05 09:56:48.512967486 +0000 UTC m=+0.068332604 container create 4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_rosalind, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux )
Oct 5 05:56:48 localhost systemd[1]: Started libpod-conmon-4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5.scope.
Oct 5 05:56:48 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:48 localhost podman[314152]: 2025-10-05 09:56:48.573135998 +0000 UTC m=+0.128501116 container init 4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_rosalind, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:56:48 localhost podman[314152]: 2025-10-05 09:56:48.582982403 +0000 UTC m=+0.138347501 container start 4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_rosalind, com.redhat.component=rhceph-container, release=553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Oct 5 05:56:48 localhost podman[314152]: 2025-10-05 09:56:48.58323844 +0000 UTC m=+0.138603558 container attach 4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_rosalind, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, 
CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:56:48 localhost admiring_rosalind[314167]: 167 167 Oct 5 05:56:48 localhost systemd[1]: libpod-4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5.scope: Deactivated successfully. Oct 5 05:56:48 localhost podman[314152]: 2025-10-05 09:56:48.585968404 +0000 UTC m=+0.141333522 container died 4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_rosalind, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, release=553) Oct 5 05:56:48 localhost podman[314152]: 2025-10-05 09:56:48.489673677 +0000 UTC m=+0.045038845 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:56:48 localhost podman[314172]: 2025-10-05 09:56:48.669467824 +0000 UTC m=+0.077732436 container remove 4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_rosalind, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.33.12, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True) Oct 5 05:56:48 localhost systemd[1]: libpod-conmon-4dcdd63bf9be080937d88b4584f6f94ec52ff4575c899a5ff1c8ae56ae2bf5a5.scope: Deactivated successfully. 
Oct 5 05:56:48 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:48 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:48 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:48 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:48 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 5 05:56:48 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:48 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 5 05:56:48 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Oct 5 05:56:48 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:48 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:48 localhost systemd[1]: var-lib-containers-storage-overlay-b87a47c6664ebece97166014966758bdd4d2ff67a729b19ba2e87ecafe077632-merged.mount: Deactivated successfully.
Oct 5 05:56:49 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471152.kbhlus (monmap changed)...
Oct 5 05:56:49 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471152.kbhlus on np0005471152.localdomain
Oct 5 05:56:49 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:49 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:49 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:56:49 localhost podman[314248]:
Oct 5 05:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:56:49 localhost podman[314248]: 2025-10-05 09:56:49.410965193 +0000 UTC m=+0.085266539 container create 5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_swartz, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., version=7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 05:56:49 localhost systemd[1]: Started libpod-conmon-5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b.scope.
Oct 5 05:56:49 localhost systemd[1]: Started libcrun container.
Oct 5 05:56:49 localhost podman[314248]: 2025-10-05 09:56:49.377134421 +0000 UTC m=+0.051435807 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:56:49 localhost podman[314248]: 2025-10-05 09:56:49.477969909 +0000 UTC m=+0.152271305 container init 5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_swartz, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main)
Oct 5 05:56:49 localhost podman[314248]: 2025-10-05 09:56:49.492597903 +0000 UTC m=+0.166899249 container start 5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_swartz, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:56:49 localhost podman[314248]: 2025-10-05 09:56:49.493017495 +0000 UTC m=+0.167318901 container attach 5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_swartz, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=)
Oct 5 05:56:49 localhost optimistic_swartz[314285]: 167 167
Oct 5 05:56:49 localhost systemd[1]: libpod-5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b.scope: Deactivated successfully.
Oct 5 05:56:49 localhost podman[314248]: 2025-10-05 09:56:49.495333847 +0000 UTC m=+0.169635193 container died 5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_swartz, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, io.buildah.version=1.33.12)
Oct 5 05:56:49 localhost podman[314272]: 2025-10-05 09:56:49.506880738 +0000 UTC m=+0.088009343 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:56:49 localhost podman[314240]: 2025-10-05 09:56:49.411889628 +0000 UTC m=+0.106056490 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 05:56:49 localhost podman[314272]: 2025-10-05 09:56:49.516825417 +0000 UTC m=+0.097954022 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:56:49 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 05:56:49 localhost podman[314240]: 2025-10-05 09:56:49.546066515 +0000 UTC m=+0.240233347 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:56:49 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 05:56:49 localhost podman[314297]: 2025-10-05 09:56:49.635393873 +0000 UTC m=+0.131752373 container remove 5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_swartz, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:56:49 localhost systemd[1]: libpod-conmon-5ebb695afab9ed7c8fb3e7588edc8a4e23db3a9b166b73ac349c805e94de958b.scope: Deactivated successfully.
Oct 5 05:56:49 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:56:49 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:49 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:56:49 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:49 localhost systemd[1]: var-lib-containers-storage-overlay-ae1adeabb9a81e9950fbade33bcc6c9ba088f13797e5712fe0b219a3fc48383e-merged.mount: Deactivated successfully.
Oct 5 05:56:50 localhost ceph-mon[302793]: Reconfiguring mon.np0005471152 (monmap changed)...
Oct 5 05:56:50 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:56:50 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:50 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:56:51 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:56:51 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:51 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:56:51 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:56:51 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:56:51 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:56:51 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:51 localhost ceph-mon[302793]: Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:56:51 localhost ceph-mon[302793]: Removing np0005471148.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mon[302793]: Removing np0005471148.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:56:51 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 05:56:52 localhost systemd[1]: tmp-crun.EkmM4Z.mount: Deactivated successfully.
Oct 5 05:56:52 localhost podman[314536]: 2025-10-05 09:56:52.091713859 +0000 UTC m=+0.085663340 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 5 05:56:52 localhost podman[314536]: 2025-10-05 09:56:52.108195164 +0000 UTC m=+0.102144635 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base
Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=) Oct 5 05:56:52 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' 
entity='mgr.np0005471150.zwqxye' Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:52 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:56:52 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:53 localhost ceph-mon[302793]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:53 localhost ceph-mon[302793]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 
172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:53 localhost ceph-mon[302793]: Removing daemon mgr.np0005471148.fayrer from np0005471148.localdomain -- ports [8765] Oct 5 05:56:54 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"} v 0) Oct 5 05:56:54 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"} : dispatch Oct 5 05:56:54 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"}]': finished Oct 5 05:56:54 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Oct 5 05:56:54 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:54 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Oct 5 05:56:54 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:54 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 05:56:54 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": 
"osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 05:56:55 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"} : dispatch Oct 5 05:56:55 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005471148.fayrer"}]': finished Oct 5 05:56:55 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:55 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:55 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 5 05:56:55 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:55 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Oct 5 05:56:55 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost podman[248157]: time="2025-10-05T09:56:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:56:56 localhost podman[248157]: @ - - [05/Oct/2025:09:56:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:56:56 localhost podman[248157]: @ - - [05/Oct/2025:09:56:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19304 "" "Go-http-client/1.1" Oct 5 05:56:56 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:56:56 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0) Oct 5 05:56:56 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0) Oct 5 05:56:56 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:56:56 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:56:56 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 05:56:56 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:56:56 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:56:56 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost ceph-mon[302793]: Removing key for mgr.np0005471148.fayrer Oct 5 05:56:56 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 
localhost ceph-mon[302793]: Added label _no_schedule to host np0005471148.localdomain Oct 5 05:56:56 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost ceph-mon[302793]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005471148.localdomain Oct 5 05:56:56 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:56 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:56:56 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:56:57 localhost ceph-mon[302793]: Removing daemon crash.np0005471148 from np0005471148.localdomain -- ports [] Oct 5 05:56:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:56:57 localhost podman[314680]: 2025-10-05 09:56:57.911864416 +0000 UTC m=+0.082438404 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 05:56:57 localhost podman[314680]: 2025-10-05 09:56:57.918682579 +0000 UTC m=+0.089256557 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 05:56:57 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:56:58 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 05:56:58 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:58 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"} v 0)
Oct 5 05:56:58 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"} : dispatch
Oct 5 05:56:58 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"}]': finished
Oct 5 05:56:58 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 5 05:56:58 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:58 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 5 05:56:58 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:58 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:56:58 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:56:59 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:59 localhost ceph-mon[302793]: Removing key for client.crash.np0005471148.localdomain
Oct 5 05:56:59 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"} : dispatch
Oct 5 05:56:59 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005471148.localdomain"}]': finished
Oct 5 05:56:59 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:59 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:59 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain.devices.0}] v 0)
Oct 5 05:56:59 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:59 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471148.localdomain}] v 0)
Oct 5 05:56:59 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:59 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:56:59 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:56:59 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:56:59 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:56:59 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:56:59 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:56:59 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:56:59 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:57:00 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 5 05:57:00 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:00 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:00 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:00 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:00 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:00 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:00 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:00 localhost ceph-mon[302793]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:57:00 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:00 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"} v 0)
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"} : dispatch
Oct 5 05:57:01 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"}]': finished
Oct 5 05:57:01 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:57:01 localhost podman[314734]: 2025-10-05 09:57:01.908427132 +0000 UTC m=+0.072522106 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:57:01 localhost podman[314735]: 2025-10-05 09:57:01.969211281 +0000 UTC m=+0.130059647 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 05:57:01 localhost podman[314734]: 2025-10-05 09:57:01.99812021 +0000 UTC m=+0.162215264 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 5 05:57:02 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:57:02 localhost podman[314735]: 2025-10-05 09:57:02.052560808 +0000 UTC m=+0.213409204 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:57:02 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:57:02 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:02 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:02 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Oct 5 05:57:02 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:02 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:02 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[302793]: Reconfiguring osd.1 (monmap changed)... 
Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:57:02 localhost ceph-mon[302793]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"} : dispatch Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain"}]': finished Oct 5 05:57:02 localhost ceph-mon[302793]: Removed host np0005471148.localdomain Oct 5 05:57:02 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:03 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:03 localhost ceph-mon[302793]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:57:03 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:03 localhost ceph-mon[302793]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:03 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:03 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:03 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:03 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:03 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Oct 5 05:57:03 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:03 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:03 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:03 localhost nova_compute[297130]: 2025-10-05 09:57:03.272 2 DEBUG oslo_service.periodic_task [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:03 localhost nova_compute[297130]: 2025-10-05 09:57:03.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 05:57:03 localhost nova_compute[297130]: 2025-10-05 09:57:03.295 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 05:57:03 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:57:03 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:03 localhost sshd[314777]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:57:03 localhost systemd-logind[760]: New session 72 of user tripleo-admin. Oct 5 05:57:03 localhost systemd[1]: Created slice User Slice of UID 1003. Oct 5 05:57:03 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Oct 5 05:57:03 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Oct 5 05:57:03 localhost systemd[1]: Starting User Manager for UID 1003... 
Oct 5 05:57:04 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:04 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:04 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Oct 5 05:57:04 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:04 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0) Oct 5 05:57:04 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mgr services"} : dispatch Oct 5 05:57:04 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:04 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:04 localhost systemd[314781]: Queued start job for default target Main User Target. 
Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:57:04 localhost systemd[314781]: Created slice User Application Slice. Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:04 localhost ceph-mon[302793]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:04 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:04 localhost systemd[314781]: Started Mark boot as successful after the user session has run 2 minutes. Oct 5 05:57:04 localhost systemd[314781]: Started Daily Cleanup of User's Temporary Directories. Oct 5 05:57:04 localhost systemd[314781]: Reached target Paths. Oct 5 05:57:04 localhost systemd[314781]: Reached target Timers. Oct 5 05:57:04 localhost systemd[314781]: Starting D-Bus User Message Bus Socket... Oct 5 05:57:04 localhost systemd[314781]: Starting Create User's Volatile Files and Directories... 
Oct 5 05:57:04 localhost systemd[314781]: Finished Create User's Volatile Files and Directories. Oct 5 05:57:04 localhost systemd[314781]: Listening on D-Bus User Message Bus Socket. Oct 5 05:57:04 localhost systemd[314781]: Reached target Sockets. Oct 5 05:57:04 localhost systemd[314781]: Reached target Basic System. Oct 5 05:57:04 localhost systemd[314781]: Reached target Main User Target. Oct 5 05:57:04 localhost systemd[314781]: Startup finished in 201ms. Oct 5 05:57:04 localhost systemd[1]: Started User Manager for UID 1003. Oct 5 05:57:04 localhost systemd[1]: Started Session 72 of User tripleo-admin. Oct 5 05:57:04 localhost nova_compute[297130]: 2025-10-05 09:57:04.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:04 localhost nova_compute[297130]: 2025-10-05 09:57:04.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:04 localhost nova_compute[297130]: 2025-10-05 09:57:04.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 05:57:04 localhost python3[314923]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:05 localhost ceph-mon[302793]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:57:05 localhost ceph-mon[302793]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:57:05 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:05 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:05 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:57:05 localhost nova_compute[297130]: 2025-10-05 09:57:05.286 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:05 localhost nova_compute[297130]: 2025-10-05 09:57:05.286 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:05 localhost nova_compute[297130]: 2025-10-05 09:57:05.287 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:57:05 localhost nova_compute[297130]: 2025-10-05 09:57:05.287 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:57:05 localhost nova_compute[297130]: 2025-10-05 09:57:05.349 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:57:05 localhost python3[315069]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:05 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:05 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:06 localhost ceph-mon[302793]: 
Reconfiguring mon.np0005471150 (monmap changed)... Oct 5 05:57:06 localhost ceph-mon[302793]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:57:06 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:06 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:06 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:06 localhost nova_compute[297130]: 2025-10-05 09:57:06.329 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:06 localhost python3[315214]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 05:57:06 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 05:57:06 localhost 
ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:06 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 05:57:06 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:57:06 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:06 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 05:57:06 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 05:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:57:07 localhost systemd[1]: tmp-crun.KULML7.mount: Deactivated successfully. 
Oct 5 05:57:07 localhost podman[315234]: 2025-10-05 09:57:07.238762643 +0000 UTC m=+0.087398826 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:57:07 localhost podman[315234]: 2025-10-05 09:57:07.246249226 +0000 UTC m=+0.094885389 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:57:07 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:57:07 localhost nova_compute[297130]: 2025-10-05 09:57:07.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:07 localhost nova_compute[297130]: 2025-10-05 09:57:07.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:07 localhost systemd[1]: tmp-crun.sTr7H4.mount: Deactivated successfully. Oct 5 05:57:07 localhost podman[315235]: 2025-10-05 09:57:07.33695202 +0000 UTC m=+0.183497267 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:57:07 localhost podman[315235]: 2025-10-05 09:57:07.407182403 +0000 UTC m=+0.253727640 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 05:57:07 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:57:07 localhost ceph-mon[302793]: Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:57:07 localhost ceph-mon[302793]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:57:07 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:07 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:07 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:07 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.297 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.297 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.297 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:57:08 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:57:08 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:08 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:57:08 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2483755878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.744 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.951 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.953 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11928MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": 
"1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.954 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:57:08 localhost nova_compute[297130]: 2025-10-05 09:57:08.954 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.016 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total 
allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.017 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.169 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.510 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.511 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.537 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.564 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL
_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 05:57:09 localhost nova_compute[297130]: 2025-10-05 09:57:09.590 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:57:09 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:10 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:57:10 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1062324572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.041 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.047 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.065 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.067 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.068 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:57:10 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Oct 5 05:57:10 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:10 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 05:57:10 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 05:57:10 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 05:57:10 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:10 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:57:10 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:10 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 05:57:10 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": 
"json"} : dispatch Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:57:10 localhost nova_compute[297130]: 2025-10-05 09:57:10.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:57:10 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:10 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:10 localhost ceph-mon[302793]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:11 localhost ceph-mon[302793]: mon.np0005471152@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.493204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231493381, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1989, "num_deletes": 251, "total_data_size": 3454544, "memory_usage": 3505664, "flush_reason": "Manual Compaction"} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231512251, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2315522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20413, "largest_seqno": 22397, "table_properties": {"data_size": 2307163, "index_size": 4672, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 23470, "raw_average_key_size": 22, "raw_value_size": 2288240, "raw_average_value_size": 2232, "num_data_blocks": 206, "num_entries": 1025, "num_filter_entries": 1025, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658182, "oldest_key_time": 1759658182, "file_creation_time": 1759658231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19105 microseconds, and 5772 cpu microseconds. Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.512305) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2315522 bytes OK Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.512328) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.514456) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.514478) EVENT_LOG_v1 {"time_micros": 1759658231514471, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.514503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3444966, prev total WAL file size 
3445290, number of live WAL files 2. Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.515913) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373535' seq:72057594037927935, type:22 .. '6D6772737461740034303036' seq:0, type:0; will stop at (end) Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2261KB)], [30(17MB)] Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231515967, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 20160586, "oldest_snapshot_seqno": -1} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11692 keys, 18066456 bytes, temperature: kUnknown Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231628434, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 18066456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17999815, "index_size": 36332, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 312371, "raw_average_key_size": 26, "raw_value_size": 17800753, 
"raw_average_value_size": 1522, "num_data_blocks": 1383, "num_entries": 11692, "num_filter_entries": 11692, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759657951, "oldest_key_time": 0, "file_creation_time": 1759658231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0f9cfb4a-c800-498a-8c29-7c6387860712", "db_session_id": "9CM0VQKEVS9AVS76DTPQ", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.629004) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 18066456 bytes Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.630891) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.0 rd, 160.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 17.0 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(16.5) write-amplify(7.8) OK, records in: 12215, records dropped: 523 output_compression: NoCompression Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.630923) EVENT_LOG_v1 {"time_micros": 1759658231630908, "job": 16, "event": "compaction_finished", "compaction_time_micros": 112650, "compaction_time_cpu_micros": 48085, "output_level": 6, "num_output_files": 1, "total_output_size": 18066456, "num_input_records": 12215, "num_output_records": 11692, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231631909, "job": 16, "event": "table_file_deletion", "file_number": 32} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658231634707, "job": 
16, "event": "table_file_deletion", "file_number": 30} Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.515802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[302793]: rocksdb: (Original Log Time 2025/10/05-09:57:11.634884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:11 localhost ceph-mon[302793]: Saving service mon spec with placement label:mon Oct 5 05:57:12 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "quorum_status"} v 0) Oct 5 05:57:12 localhost ceph-mon[302793]: log_channel(audit) log [DBG] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "quorum_status"} : dispatch Oct 5 05:57:12 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e13 handle_command mon_command({"prefix": "mon rm", "name": "np0005471152"} v 0) Oct 5 05:57:12 localhost ceph-mon[302793]: log_channel(audit) log [INF] : from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "mon rm", "name": "np0005471152"} : dispatch Oct 5 05:57:12 localhost ceph-mgr[301363]: ms_deliver_dispatch: 
unhandled message 0x562dbe53b600 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Oct 5 05:57:12 localhost ceph-mon[302793]: mon.np0005471152@0(leader) e14 removed from monmap, suicide.
Oct 5 05:57:12 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Oct 5 05:57:12 localhost ceph-mgr[301363]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Oct 5 05:57:12 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x562dbe53b080 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Oct 5 05:57:13 localhost podman[315368]: 2025-10-05 09:57:13.030284397 +0000 UTC m=+0.052848545 container died 3155c6e2151277c3bdbfc98ee729963a9a26ec3b0c5c1f0f486450354e49ff99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container)
Oct 5 05:57:13 localhost systemd[1]: var-lib-containers-storage-overlay-c76f6cdeaf6d3bb2c079cf60de79aa0601066cfa0cab34e002cc066d9ba465d0-merged.mount: Deactivated successfully.
Oct 5 05:57:13 localhost podman[315368]: 2025-10-05 09:57:13.070538513 +0000 UTC m=+0.093102621 container remove 3155c6e2151277c3bdbfc98ee729963a9a26ec3b0c5c1f0f486450354e49ff99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:57:14 localhost systemd[1]: ceph-659062ac-50b4-5607-b699-3105da7f55ee@mon.np0005471152.service: Deactivated successfully.
Oct 5 05:57:14 localhost systemd[1]: Stopped Ceph mon.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee.
Oct 5 05:57:14 localhost systemd[1]: ceph-659062ac-50b4-5607-b699-3105da7f55ee@mon.np0005471152.service: Consumed 11.725s CPU time.
Oct 5 05:57:14 localhost systemd[1]: Reloading.
Oct 5 05:57:14 localhost systemd-rc-local-generator[315761]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 5 05:57:14 localhost systemd-sysv-generator[315764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 5 05:57:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 5 05:57:16 localhost openstack_network_exporter[250246]: ERROR 09:57:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:57:16 localhost openstack_network_exporter[250246]: ERROR 09:57:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:57:16 localhost openstack_network_exporter[250246]: ERROR 09:57:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:57:16 localhost openstack_network_exporter[250246]: ERROR 09:57:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:57:16 localhost openstack_network_exporter[250246]:
Oct 5 05:57:16 localhost openstack_network_exporter[250246]: ERROR 09:57:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:57:16 localhost openstack_network_exporter[250246]:
Oct 5 05:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:57:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:57:19 localhost podman[315881]: 2025-10-05 09:57:19.910718166 +0000 UTC m=+0.077822419 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 05:57:19 localhost podman[315881]: 2025-10-05 09:57:19.919871783 +0000 UTC m=+0.086976026 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 05:57:19 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 05:57:19 localhost podman[315880]: 2025-10-05 09:57:19.963362195 +0000 UTC m=+0.130302393 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 05:57:19 localhost podman[315880]: 2025-10-05 09:57:19.977131206 +0000 UTC m=+0.144071384 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:57:19 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 05:57:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:57:20.394 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 05:57:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:57:20.395 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 05:57:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:57:20.395 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 05:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 05:57:22 localhost podman[315920]: 2025-10-05 09:57:22.91318699 +0000 UTC m=+0.077270184 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41)
Oct 5 05:57:22 localhost podman[315920]: 2025-10-05 09:57:22.957157825 +0000 UTC m=+0.121241049 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Oct 5 05:57:22 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 05:57:25 localhost podman[315994]:
Oct 5 05:57:25 localhost podman[315994]: 2025-10-05 09:57:25.528891336 +0000 UTC m=+0.076344458 container create 655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_haslett, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, release=553, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, build-date=2025-09-24T08:57:55)
Oct 5 05:57:25 localhost systemd[1]: Started libpod-conmon-655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a.scope.
Oct 5 05:57:25 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:25 localhost podman[315994]: 2025-10-05 09:57:25.499055854 +0000 UTC m=+0.046509026 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:25 localhost podman[315994]: 2025-10-05 09:57:25.603258218 +0000 UTC m=+0.150711350 container init 655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_haslett, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, GIT_BRANCH=main, name=rhceph, ceph=True, io.buildah.version=1.33.12, release=553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:57:25 localhost podman[315994]: 2025-10-05 09:57:25.613953226 +0000 UTC m=+0.161406358 container start 655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_haslett, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=553, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7)
Oct 5 05:57:25 localhost podman[315994]: 2025-10-05 09:57:25.614231143 +0000 UTC m=+0.161684305 container attach 655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_haslett, GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph)
Oct 5 05:57:25 localhost fervent_haslett[316009]: 167 167
Oct 5 05:57:25 localhost systemd[1]: libpod-655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a.scope: Deactivated successfully.
Oct 5 05:57:25 localhost podman[315994]: 2025-10-05 09:57:25.618744959 +0000 UTC m=+0.166198151 container died 655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_haslett, build-date=2025-09-24T08:57:55, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, architecture=x86_64, ceph=True, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:57:25 localhost podman[316014]: 2025-10-05 09:57:25.713065088 +0000 UTC m=+0.083494318 container remove 655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_haslett, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., architecture=x86_64)
Oct 5 05:57:25 localhost systemd[1]: libpod-conmon-655f2f27caa6ae5edb0c8faef484e7cdf199bde791cd5b39b9fbde842423185a.scope: Deactivated successfully.
Oct 5 05:57:26 localhost podman[248157]: time="2025-10-05T09:57:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:57:26 localhost podman[248157]: @ - - [05/Oct/2025:09:57:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144122 "" "Go-http-client/1.1"
Oct 5 05:57:26 localhost podman[248157]: @ - - [05/Oct/2025:09:57:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18818 "" "Go-http-client/1.1"
Oct 5 05:57:26 localhost podman[316119]:
Oct 5 05:57:26 localhost podman[316119]: 2025-10-05 09:57:26.441294687 +0000 UTC m=+0.074492647 container create 8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_curran, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., release=553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Oct 5 05:57:26 localhost systemd[1]: Started libpod-conmon-8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5.scope.
Oct 5 05:57:26 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:26 localhost podman[316119]: 2025-10-05 09:57:26.500811785 +0000 UTC m=+0.134009745 container init 8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_curran, name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55)
Oct 5 05:57:26 localhost podman[316119]: 2025-10-05 09:57:26.410161709 +0000 UTC m=+0.043359699 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:26 localhost podman[316119]: 2025-10-05 09:57:26.511996686 +0000 UTC m=+0.145194636 container start 8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_curran, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-type=git, name=rhceph)
Oct 5 05:57:26 localhost podman[316119]: 2025-10-05 09:57:26.512213552 +0000 UTC m=+0.145411502 container attach 8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_curran, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Oct 5 05:57:26 localhost eloquent_curran[316136]: 167 167
Oct 5 05:57:26 localhost systemd[1]: libpod-8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5.scope: Deactivated successfully.
Oct 5 05:57:26 localhost podman[316119]: 2025-10-05 09:57:26.515155904 +0000 UTC m=+0.148353884 container died 8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_curran, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64)
Oct 5 05:57:26 localhost systemd[1]: var-lib-containers-storage-overlay-e29112b033105fca0d3af07dd87ac5749bf06a1551d6fb990c5d7ac8b200c423-merged.mount: Deactivated successfully.
Oct 5 05:57:26 localhost systemd[1]: var-lib-containers-storage-overlay-c78287bec6d49e95dc327ba6a9cbcb8a874440d704cff11deda6bc95f2019152-merged.mount: Deactivated successfully.
Oct 5 05:57:26 localhost podman[316151]: 2025-10-05 09:57:26.617673321 +0000 UTC m=+0.090147254 container remove 8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_curran, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:57:26 localhost systemd[1]: libpod-conmon-8afdf1c371ee8fe775c72be2dcbd867cc985a55592e9e8bedc0f3a9278d686f5.scope: Deactivated successfully.
Oct 5 05:57:27 localhost podman[316241]:
Oct 5 05:57:27 localhost podman[316241]: 2025-10-05 09:57:27.051506358 +0000 UTC m=+0.065909708 container create 1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_cray, release=553, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 5 05:57:27 localhost systemd[1]: Started libpod-conmon-1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2.scope.
Oct 5 05:57:27 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:27 localhost podman[316241]: 2025-10-05 09:57:27.118906915 +0000 UTC m=+0.133310235 container init 1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_cray, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 5 05:57:27 localhost podman[316241]: 2025-10-05 09:57:27.022784797 +0000 UTC m=+0.037188137 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:27 localhost elated_cray[316258]: 167 167
Oct 5 05:57:27 localhost systemd[1]: libpod-1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2.scope: Deactivated successfully.
Oct 5 05:57:27 localhost podman[316241]: 2025-10-05 09:57:27.130671003 +0000 UTC m=+0.145074403 container start 1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_cray, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=)
Oct 5 05:57:27 localhost podman[316241]: 2025-10-05 09:57:27.131132926 +0000 UTC m=+0.145536256 container attach 1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_cray, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, name=rhceph, ceph=True, maintainer=Guillaume Abrioux ,
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container) Oct 5 05:57:27 localhost podman[316241]: 2025-10-05 09:57:27.151557785 +0000 UTC m=+0.165961155 container died 1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_cray, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, io.openshift.tags=rhceph ceph) Oct 5 05:57:27 localhost podman[316263]: 2025-10-05 09:57:27.218640324 +0000 UTC m=+0.078552180 container remove 1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_cray, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhceph, distribution-scope=public, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Oct 5 05:57:27 localhost systemd[1]: libpod-conmon-1c6e483419543fab5e79de4f70c090ba045c0e181c1e33c340096622363fccb2.scope: Deactivated successfully. 
Oct 5 05:57:27 localhost podman[316279]: Oct 5 05:57:27 localhost podman[316279]: 2025-10-05 09:57:27.307039267 +0000 UTC m=+0.055386724 container create 15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_moore, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, version=7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:57:27 localhost systemd[1]: Started libpod-conmon-15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc.scope. Oct 5 05:57:27 localhost systemd[1]: Started libcrun container. 
Oct 5 05:57:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba06d0da9baa7a682feab506f440d63dc6837fcc035c302e6b982c16f71d0da5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba06d0da9baa7a682feab506f440d63dc6837fcc035c302e6b982c16f71d0da5/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba06d0da9baa7a682feab506f440d63dc6837fcc035c302e6b982c16f71d0da5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba06d0da9baa7a682feab506f440d63dc6837fcc035c302e6b982c16f71d0da5/merged/var/lib/ceph/mon/ceph-np0005471152 supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:27 localhost podman[316279]: 2025-10-05 09:57:27.367504101 +0000 UTC m=+0.115851578 container init 15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_moore, release=553, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, 
ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main) Oct 5 05:57:27 localhost podman[316279]: 2025-10-05 09:57:27.376262975 +0000 UTC m=+0.124610442 container start 15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_moore, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc.) 
Oct 5 05:57:27 localhost podman[316279]: 2025-10-05 09:57:27.376711108 +0000 UTC m=+0.125058635 container attach 15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_moore, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph) Oct 5 05:57:27 localhost podman[316279]: 2025-10-05 09:57:27.282927645 +0000 UTC m=+0.031275142 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:27 localhost systemd[1]: libpod-15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc.scope: Deactivated successfully. 
Oct 5 05:57:27 localhost podman[316279]: 2025-10-05 09:57:27.457620001 +0000 UTC m=+0.205967498 container died 15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_moore, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, ceph=True, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:57:27 localhost systemd[1]: var-lib-containers-storage-overlay-90a34800e04f04c5bd65fc35ff78d1f953f67e02cb0b256dc79f1ac8840a228f-merged.mount: Deactivated successfully. Oct 5 05:57:27 localhost systemd[1]: var-lib-containers-storage-overlay-ba06d0da9baa7a682feab506f440d63dc6837fcc035c302e6b982c16f71d0da5-merged.mount: Deactivated successfully. 
Oct 5 05:57:27 localhost podman[316332]: 2025-10-05 09:57:27.55375014 +0000 UTC m=+0.082092288 container remove 15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_moore, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=553, architecture=x86_64, distribution-scope=public, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:57:27 localhost systemd[1]: libpod-conmon-15d05d423501eeeee24aafeafdbe4ba765cc08cb742e9c6629f655a2f711b2fc.scope: Deactivated successfully. Oct 5 05:57:27 localhost systemd[1]: Reloading. Oct 5 05:57:27 localhost systemd-rc-local-generator[316369]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:57:27 localhost systemd-sysv-generator[316374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Oct 5 05:57:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:57:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:57:28 localhost systemd[1]: Reloading. Oct 5 05:57:28 localhost podman[316384]: 2025-10-05 09:57:28.054747539 +0000 UTC m=+0.090537784 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 05:57:28 localhost podman[316384]: 2025-10-05 09:57:28.100249086 +0000 UTC m=+0.136039321 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent) Oct 5 05:57:28 localhost systemd-rc-local-generator[316431]: /etc/rc.d/rc.local is not marked executable, skipping. Oct 5 05:57:28 localhost systemd-sysv-generator[316434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Oct 5 05:57:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 5 05:57:28 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:57:28 localhost systemd[1]: Starting Ceph mon.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee... 
Oct 5 05:57:28 localhost podman[316493]: Oct 5 05:57:28 localhost podman[316493]: 2025-10-05 09:57:28.743916809 +0000 UTC m=+0.098315200 container create 1095641a75f701b734f01e43df5210c51a3139e504c7715005d32ab74fc1a0cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Oct 5 05:57:28 localhost podman[316493]: 2025-10-05 09:57:28.690376478 +0000 UTC m=+0.044774919 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880e5ecc6894bdca6bfc5fb447c66c34187af472058c248ae5a7e6d57397b31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880e5ecc6894bdca6bfc5fb447c66c34187af472058c248ae5a7e6d57397b31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:28 
localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880e5ecc6894bdca6bfc5fb447c66c34187af472058c248ae5a7e6d57397b31e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/880e5ecc6894bdca6bfc5fb447c66c34187af472058c248ae5a7e6d57397b31e/merged/var/lib/ceph/mon/ceph-np0005471152 supports timestamps until 2038 (0x7fffffff) Oct 5 05:57:28 localhost podman[316493]: 2025-10-05 09:57:28.802589924 +0000 UTC m=+0.156988315 container init 1095641a75f701b734f01e43df5210c51a3139e504c7715005d32ab74fc1a0cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, version=7, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Oct 5 05:57:28 localhost podman[316493]: 2025-10-05 09:57:28.813188499 +0000 UTC m=+0.167586890 container start 1095641a75f701b734f01e43df5210c51a3139e504c7715005d32ab74fc1a0cb 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-mon-np0005471152, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.) Oct 5 05:57:28 localhost bash[316493]: 1095641a75f701b734f01e43df5210c51a3139e504c7715005d32ab74fc1a0cb Oct 5 05:57:28 localhost systemd[1]: Started Ceph mon.np0005471152 for 659062ac-50b4-5607-b699-3105da7f55ee. 
Oct 5 05:57:28 localhost ceph-mon[316511]: set uid:gid to 167:167 (ceph:ceph) Oct 5 05:57:28 localhost ceph-mon[316511]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Oct 5 05:57:28 localhost ceph-mon[316511]: pidfile_write: ignore empty --pid-file Oct 5 05:57:28 localhost ceph-mon[316511]: load: jerasure load: lrc Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: RocksDB version: 7.9.2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Git sha 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Compile date 2025-09-23 00:00:00 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: DB SUMMARY Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: DB Session ID: F5HXXNFJ1JNSSRYMZ5WS Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: CURRENT file: CURRENT Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: IDENTITY file: IDENTITY Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005471152/store.db dir, Total Num: 0, files: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005471152/store.db: 000004.log size: 636 ; Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.error_if_exists: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.create_if_missing: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.paranoid_checks: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.flush_verify_memtable_count: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.env: 0x5603a5c149e0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.fs: PosixFileSystem Oct 5 05:57:28 localhost 
ceph-mon[316511]: rocksdb: Options.info_log: 0x5603a6bb4d20 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_file_opening_threads: 16 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.statistics: (nil) Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.use_fsync: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_log_file_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_manifest_file_size: 1073741824 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.log_file_time_to_roll: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.keep_log_file_num: 1000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.recycle_log_file_num: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.allow_fallocate: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.allow_mmap_reads: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.allow_mmap_writes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.use_direct_reads: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.create_missing_column_families: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.db_log_dir: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.wal_dir: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.table_cache_numshardbits: 6 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.WAL_ttl_seconds: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.WAL_size_limit_MB: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.manifest_preallocation_size: 4194304 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.is_fd_close_on_exec: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.advise_random_on_open: 
1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.db_write_buffer_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.write_buffer_manager: 0x5603a6bc5540 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.access_hint_on_compaction_start: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.random_access_max_buffer_size: 1048576 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.use_adaptive_mutex: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.rate_limiter: (nil) Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.wal_recovery_mode: 2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.enable_thread_tracking: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.enable_pipelined_write: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.unordered_write: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.allow_concurrent_memtable_write: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.write_thread_max_yield_usec: 100 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.write_thread_slow_yield_usec: 3 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.row_cache: None Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.wal_filter: None Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.avoid_flush_during_recovery: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.allow_ingest_behind: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.two_write_queues: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.manual_wal_flush: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.wal_compression: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.atomic_flush: 0 Oct 5 
05:57:28 localhost ceph-mon[316511]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.persist_stats_to_disk: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.write_dbid_to_manifest: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.log_readahead_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.file_checksum_gen_factory: Unknown Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.best_efforts_recovery: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.allow_data_in_errors: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.db_host_id: __hostname__ Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.enforce_single_del_contracts: true Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_background_jobs: 2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_background_compactions: -1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_subcompactions: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.avoid_flush_during_shutdown: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.delayed_write_rate : 16777216 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_total_wal_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.stats_dump_period_sec: 600 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.stats_persist_period_sec: 600 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.stats_history_buffer_size: 1048576 Oct 5 05:57:28 
localhost ceph-mon[316511]: rocksdb: Options.max_open_files: -1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bytes_per_sync: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.wal_bytes_per_sync: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.strict_bytes_per_sync: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_readahead_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_background_flushes: -1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Compression algorithms supported: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kZSTD supported: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kXpressCompression supported: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kBZip2Compression supported: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kLZ4Compression supported: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kZlibCompression supported: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kLZ4HCCompression supported: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: #011kSnappyCompression supported: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Fast CRC32 supported: Supported on x86 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: DMutex implementation: pthread_mutex_t Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005471152/store.db/MANIFEST-000005 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.comparator: leveldb.BytewiseComparator Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.merge_operator: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_filter: 
None Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_filter_factory: None Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.sst_partitioner_factory: None Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.memtable_factory: SkipListFactory Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.table_factory: BlockBasedTable Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a6bb4980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5603a6bb1350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.write_buffer_size: 33554432 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_write_buffer_number: 2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression: NoCompression Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression: 
Disabled Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.prefix_extractor: nullptr Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.num_levels: 7 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.level: 32767 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.enabled: false Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.window_bits: -14 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.level: 32767 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.strategy: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.parallel_threads: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.enabled: false Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.level0_stop_writes_trigger: 36 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.target_file_size_base: 67108864 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.target_file_size_multiplier: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_base: 268435456 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_compaction_bytes: 1677721600 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.arena_block_size: 1048576 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.disable_auto_compactions: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_style: kCompactionStyleLevel Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.table_properties_collectors: Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.inplace_update_support: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: 
Options.inplace_update_num_locks: 10000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.memtable_whole_key_filtering: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.memtable_huge_page_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.bloom_locality: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.max_successive_merges: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.optimize_filters_for_hits: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.paranoid_file_checks: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.force_consistency_checks: 1 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.report_bg_io_stats: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.ttl: 2592000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.periodic_compaction_seconds: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.preclude_last_level_data_seconds: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.preserve_internal_time_seconds: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.enable_blob_files: false Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.min_blob_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.blob_file_size: 268435456 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.blob_compression_type: NoCompression Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.enable_blob_garbage_collection: false Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.blob_compaction_readahead_size: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: 
Options.blob_file_starting_level: 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005471152/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 09f88e28-27a5-4ad9-a669-134d4123f6f8 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658248860746, "job": 1, "event": "recovery_started", "wal_files": [4]} Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658248862944, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": 
"NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658248863054, "job": 1, "event": "recovery_finished"} Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5603a6bd8e00 Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: DB pointer 0x5603a6cce000 Oct 5 05:57:28 localhost ceph-mon[316511]: mon.np0005471152 does not exist in monmap, will attempt to join an existing cluster Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 05:57:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.72 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.72 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown 
for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a6bb1350#2 capacity: 512.00 MB usage: 0.98 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.77 KB,0.000146031%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 05:57:28 localhost ceph-mon[316511]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] Oct 5 05:57:28 localhost ceph-mon[316511]: starting mon.np0005471152 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005471152 fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:57:28 localhost ceph-mon[316511]: mon.np0005471152@-1(???) e0 preinit fsid 659062ac-50b4-5607-b699-3105da7f55ee Oct 5 05:57:28 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing) e14 sync_obtain_latest_monmap Oct 5 05:57:28 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing) e14 sync_obtain_latest_monmap obtained monmap e14 Oct 5 05:57:28 localhost podman[316554]: Oct 5 05:57:28 localhost podman[316554]: 2025-10-05 09:57:28.961856151 +0000 UTC m=+0.070568677 container create addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hertz, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, 
io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, release=553, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main) Oct 5 05:57:29 localhost systemd[1]: Started libpod-conmon-addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790.scope. Oct 5 05:57:29 localhost systemd[1]: Started libcrun container. Oct 5 05:57:29 localhost podman[316554]: 2025-10-05 09:57:28.928254534 +0000 UTC m=+0.036967110 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:57:29 localhost podman[316554]: 2025-10-05 09:57:29.037082137 +0000 UTC m=+0.145794663 container init addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hertz, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, release=553, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 
on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-type=git, version=7, ceph=True) Oct 5 05:57:29 localhost podman[316554]: 2025-10-05 09:57:29.049466972 +0000 UTC m=+0.158179488 container start addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hertz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, version=7) Oct 5 05:57:29 localhost podman[316554]: 2025-10-05 09:57:29.049950576 +0000 UTC m=+0.158663142 container attach addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hertz, name=rhceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, 
Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, ceph=True) Oct 5 05:57:29 localhost clever_hertz[316569]: 167 167 Oct 5 05:57:29 localhost systemd[1]: libpod-addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790.scope: Deactivated successfully. Oct 5 05:57:29 localhost podman[316554]: 2025-10-05 09:57:29.056981911 +0000 UTC m=+0.165694517 container died addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hertz, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_BRANCH=main, 
release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).mds e16 new map Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-05T08:04:17.819317+0000#012modified#0112025-10-05T09:51:24.604984+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01180#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26863}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26863 members: 26863#012[mds.mds.np0005471152.pozuqw{0:26863} state up:active seq 14 addr [v2:172.18.0.108:6808/114949388,v1:172.18.0.108:6809/114949388] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 
#012[mds.mds.np0005471151.uyxcpj{-1:17211} state up:standby seq 1 addr [v2:172.18.0.107:6808/3905827397,v1:172.18.0.107:6809/3905827397] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005471150.bsiqok{-1:17217} state up:standby seq 1 addr [v2:172.18.0.106:6808/1854153836,v1:172.18.0.106:6809/1854153836] compat {c=[1],r=[1],i=[17ff]}] Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).osd e85 crush map has features 3314933000854323200, adjusting msgr requires Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).osd e85 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).osd e85 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).osd e85 crush map has features 432629239337189376, adjusting msgr requires Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.4 (monmap changed)... Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... 
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mon.np0005471150 (monmap changed)... 
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: Saving service mon spec with placement label:mon
Oct 5 05:57:29 localhost ceph-mon[316511]: Remove daemons mon.np0005471152
Oct 5 05:57:29 localhost ceph-mon[316511]: Safe to remove mon.np0005471152: new quorum should be ['np0005471150', 'np0005471151'] (from ['np0005471150', 'np0005471151'])
Oct 5 05:57:29 localhost ceph-mon[316511]: Removing monitor np0005471152 from monmap...
Oct 5 05:57:29 localhost ceph-mon[316511]: Removing daemon mon.np0005471152 from np0005471152.localdomain -- ports []
Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471151 calling monitor election
Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471150 calling monitor election
Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471150 is new leader, mons np0005471150,np0005471151 in quorum (ranks 0,1)
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: overall HEALTH_OK
Oct 5 05:57:29 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:29 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:29 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:29 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:29 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring crash.np0005471150 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.1 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.1 on np0005471150.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.4 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.4 on np0005471150.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring crash.np0005471151 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.5 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.0 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: Deploying daemon mon.np0005471152 on np0005471152.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye'
Oct 5 05:57:29 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring osd.3 (monmap changed)...
Oct 5 05:57:29 localhost ceph-mon[316511]: Reconfiguring daemon osd.3 on np0005471152.localdomain
Oct 5 05:57:29 localhost ceph-mon[316511]: mon.np0005471152@-1(synchronizing).paxosservice(auth 1..41) refresh upgraded, format 0 -> 3
Oct 5 05:57:29 localhost podman[316575]: 2025-10-05 09:57:29.160808304 +0000 UTC m=+0.088236599 container remove addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hertz, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:57:29 localhost systemd[1]: libpod-conmon-addc155f16721e32e35330cc6d9a40405c9518df4fa84361ad65127a25d8b790.scope: Deactivated successfully.
Oct 5 05:57:29 localhost ceph-mgr[301363]: ms_deliver_dispatch: unhandled message 0x562dbe53af20 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Oct 5 05:57:29 localhost systemd[1]: var-lib-containers-storage-overlay-2b720797fc85b1b102c914ac458483bc6ad631401934bd87f87f4e3517ecab0a-merged.mount: Deactivated successfully.
Oct 5 05:57:31 localhost ceph-mon[316511]: mon.np0005471152@-1(probing) e15 my rank is now 2 (was -1)
Oct 5 05:57:31 localhost ceph-mon[316511]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election
Oct 5 05:57:31 localhost ceph-mon[316511]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1
Oct 5 05:57:31 localhost ceph-mon[316511]: mon.np0005471152@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Oct 5 05:57:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 05:57:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 05:57:32 localhost podman[316600]: 2025-10-05 09:57:32.917970422 +0000 UTC m=+0.085437351 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 05:57:32 localhost podman[316600]: 2025-10-05 09:57:32.931015375 +0000 UTC m=+0.098482304 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 05:57:32 localhost podman[316599]: 2025-10-05 09:57:32.963433879 +0000 UTC m=+0.131684560 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 05:57:32 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 05:57:33 localhost podman[316599]: 2025-10-05 09:57:33.001268103 +0000 UTC m=+0.169518754 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 5 05:57:33 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:57:33 localhost nova_compute[297130]: 2025-10-05 09:57:33.054 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:57:34 localhost podman[316695]:
Oct 5 05:57:34 localhost podman[316695]: 2025-10-05 09:57:34.88980836 +0000 UTC m=+0.079858796 container create a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_booth, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc.)
Oct 5 05:57:34 localhost systemd[1]: Started libpod-conmon-a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa.scope.
Oct 5 05:57:34 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:34 localhost podman[316695]: 2025-10-05 09:57:34.856637226 +0000 UTC m=+0.046687702 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:34 localhost podman[316695]: 2025-10-05 09:57:34.95980137 +0000 UTC m=+0.149851816 container init a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_booth, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container)
Oct 5 05:57:34 localhost podman[316695]: 2025-10-05 09:57:34.971190197 +0000 UTC m=+0.161240633 container start a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_booth, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Oct 5 05:57:34 localhost podman[316695]: 2025-10-05 09:57:34.971843785 +0000 UTC m=+0.161894271 container attach a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_booth, io.openshift.tags=rhceph ceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 05:57:34 localhost nostalgic_booth[316710]: 167 167
Oct 5 05:57:34 localhost systemd[1]: libpod-a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa.scope: Deactivated successfully.
Oct 5 05:57:34 localhost podman[316695]: 2025-10-05 09:57:34.975867557 +0000 UTC m=+0.165917993 container died a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_booth, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Oct 5 05:57:35 localhost podman[316715]: 2025-10-05 09:57:35.07109499 +0000 UTC m=+0.084679760 container remove a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_booth, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, name=rhceph, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:57:35 localhost systemd[1]: libpod-conmon-a7a35a1c970fecd8e48e21ef71a50e56cff2d3ec06b97c457d36bc629db16daa.scope: Deactivated successfully.
Oct 5 05:57:35 localhost podman[316784]:
Oct 5 05:57:35 localhost podman[316784]: 2025-10-05 09:57:35.780985389 +0000 UTC m=+0.065876467 container create 47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_matsumoto, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.33.12, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:57:35 localhost systemd[1]: Started libpod-conmon-47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4.scope.
Oct 5 05:57:35 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:35 localhost podman[316784]: 2025-10-05 09:57:35.74838168 +0000 UTC m=+0.033272848 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:35 localhost podman[316784]: 2025-10-05 09:57:35.839722565 +0000 UTC m=+0.124613643 container init 47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_matsumoto, ceph=True, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, release=553, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 5 05:57:35 localhost podman[316784]: 2025-10-05 09:57:35.849145347 +0000 UTC m=+0.134036425 container start 47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_matsumoto, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 5 05:57:35 localhost podman[316784]: 2025-10-05 09:57:35.849520128 +0000 UTC m=+0.134411216 container attach 47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_matsumoto, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=553, name=rhceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 05:57:35 localhost tender_matsumoto[316799]: 167 167 Oct 5 05:57:35 localhost systemd[1]: libpod-47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4.scope: Deactivated successfully. 
Oct 5 05:57:35 localhost podman[316784]: 2025-10-05 09:57:35.852247984 +0000 UTC m=+0.137139092 container died 47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_matsumoto, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, vcs-type=git, name=rhceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, version=7) Oct 5 05:57:35 localhost systemd[1]: var-lib-containers-storage-overlay-35a46cb9c7c791399142872872eb2f108f740c8f4ab3e17cae4a70734ecfcd88-merged.mount: Deactivated successfully. Oct 5 05:57:35 localhost systemd[1]: tmp-crun.lnS5Ym.mount: Deactivated successfully. Oct 5 05:57:35 localhost systemd[1]: var-lib-containers-storage-overlay-d474c4583aa18c7c99225f5560f55be6fab9fd514a26128fe9381ca3ef85c6cc-merged.mount: Deactivated successfully. 
Oct 5 05:57:35 localhost podman[316804]: 2025-10-05 09:57:35.94796778 +0000 UTC m=+0.087853988 container remove 47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_matsumoto, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7) Oct 5 05:57:35 localhost systemd[1]: libpod-conmon-47334efe6a6fb928edc285afa52f34622230f81f6f20cd683be045deace405a4.scope: Deactivated successfully. 
Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471150 calling monitor election Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471151 calling monitor election Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471150 is new leader, mons np0005471150,np0005471151 in quorum (ranks 0,1) Oct 5 05:57:36 localhost ceph-mon[316511]: Health check failed: 1/3 mons down, quorum np0005471150,np0005471151 (MON_DOWN) Oct 5 05:57:36 localhost ceph-mon[316511]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005471150,np0005471151 Oct 5 05:57:36 localhost ceph-mon[316511]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005471150,np0005471151 Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum) Oct 5 05:57:36 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:36 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:36 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471152.pozuqw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:36 localhost ceph-mon[316511]: Reconfiguring mds.mds.np0005471152.pozuqw (monmap changed)... 
Oct 5 05:57:36 localhost ceph-mon[316511]: Reconfiguring daemon mds.mds.np0005471152.pozuqw on np0005471152.localdomain Oct 5 05:57:36 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:36 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:36 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471152.kbhlus", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.254929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256255040, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12066, "num_deletes": 261, "total_data_size": 16008241, "memory_usage": 16715736, "flush_reason": "Manual Compaction"} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Oct 5 05:57:36 localhost ceph-mon[316511]: log_channel(cluster) log [INF] : mon.np0005471152 calling monitor election Oct 5 05:57:36 localhost ceph-mon[316511]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152@2(electing) e15 collect_metadata vda: no unique 
device id for vda: fallback method has no model nor serial Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Oct 5 05:57:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Oct 5 05:57:36 localhost ceph-mon[316511]: mgrc update_daemon_metadata mon.np0005471152 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005471152.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005471152.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 
2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256318033, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 15785026, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12071, "table_properties": {"data_size": 15719886, "index_size": 34634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29445, "raw_key_size": 315228, "raw_average_key_size": 26, "raw_value_size": 15521475, "raw_average_value_size": 1318, "num_data_blocks": 1306, "num_entries": 11770, "num_filter_entries": 11770, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 1759658248, "file_creation_time": 1759658256, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 63181 microseconds, and 30964 cpu microseconds. 
Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.318113) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 15785026 bytes OK Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.318147) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.321030) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.321056) EVENT_LOG_v1 {"time_micros": 1759658256321048, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.321076) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 15923830, prev total WAL file size 15928491, number of live WAL files 2. Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.323522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373634' seq:72057594037927935, type:22 .. 
'6C6F676D0034303135' seq:0, type:0; will stop at (end) Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(15MB) 8(1762B)] Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256323643, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 15786788, "oldest_snapshot_seqno": -1} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 11517 keys, 15781633 bytes, temperature: kUnknown Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256396010, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 15781633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15717148, "index_size": 34624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 310895, "raw_average_key_size": 26, "raw_value_size": 15522001, "raw_average_value_size": 1347, "num_data_blocks": 1306, "num_entries": 11517, "num_filter_entries": 11517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658256, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.396385) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 15781633 bytes Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.398454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.9 rd, 217.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(15.1, 0.0 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 11775, records dropped: 258 output_compression: NoCompression Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.398487) EVENT_LOG_v1 {"time_micros": 1759658256398473, "job": 4, "event": "compaction_finished", "compaction_time_micros": 72449, "compaction_time_cpu_micros": 42191, "output_level": 6, "num_output_files": 1, "total_output_size": 15781633, "num_input_records": 11775, "num_output_records": 11517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000014.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256401400, "job": 4, "event": "table_file_deletion", "file_number": 14} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658256401459, "job": 4, "event": "table_file_deletion", "file_number": 8} Oct 5 05:57:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:57:36.323363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 05:57:36 localhost podman[316932]: 2025-10-05 09:57:36.967433204 +0000 UTC m=+0.072721638 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=553, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7) Oct 5 05:57:37 localhost podman[316932]: 2025-10-05 09:57:37.094878485 +0000 UTC m=+0.200166939 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc.) 
Oct 5 05:57:37 localhost ceph-mon[316511]: mon.np0005471152 calling monitor election Oct 5 05:57:37 localhost ceph-mon[316511]: mon.np0005471152 calling monitor election Oct 5 05:57:37 localhost ceph-mon[316511]: mon.np0005471150 calling monitor election Oct 5 05:57:37 localhost ceph-mon[316511]: mon.np0005471151 calling monitor election Oct 5 05:57:37 localhost ceph-mon[316511]: mon.np0005471150 is new leader, mons np0005471150,np0005471151,np0005471152 in quorum (ranks 0,1,2) Oct 5 05:57:37 localhost ceph-mon[316511]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005471150,np0005471151) Oct 5 05:57:37 localhost ceph-mon[316511]: Cluster is now healthy Oct 5 05:57:37 localhost ceph-mon[316511]: overall HEALTH_OK Oct 5 05:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:57:37 localhost podman[317004]: 2025-10-05 09:57:37.491842644 +0000 UTC m=+0.109709788 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001) Oct 5 05:57:37 localhost podman[317004]: 2025-10-05 09:57:37.524294528 +0000 UTC m=+0.142161672 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:57:37 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:57:37 localhost podman[317036]: 2025-10-05 09:57:37.586198163 +0000 UTC m=+0.085219225 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:57:37 localhost podman[317036]: 2025-10-05 09:57:37.667299763 +0000 UTC m=+0.166320785 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 05:57:37 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:57:38 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:38 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:38 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:38 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:38 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:38 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:39 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:39 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:39 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:41 
localhost ceph-mon[316511]: Reconfiguring crash.np0005471150 (monmap changed)... Oct 5 05:57:41 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471150.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:41 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471150 on np0005471150.localdomain Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: Reconfiguring osd.1 (monmap changed)... Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Oct 5 05:57:42 localhost ceph-mon[316511]: Reconfiguring daemon osd.1 on np0005471150.localdomain Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' 
entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:42 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:43 localhost ceph-mon[316511]: Reconfig service osd.default_drive_group Oct 5 05:57:43 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:43 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:43 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:43 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:43 localhost ceph-mon[316511]: Reconfiguring osd.4 (monmap changed)... 
Oct 5 05:57:43 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Oct 5 05:57:43 localhost ceph-mon[316511]: Reconfiguring daemon osd.4 on np0005471150.localdomain Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: Reconfiguring mds.mds.np0005471150.bsiqok (monmap changed)... Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471150.bsiqok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Oct 5 05:57:44 localhost ceph-mon[316511]: Reconfiguring daemon mds.mds.np0005471150.bsiqok on np0005471150.localdomain Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:44 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471150.zwqxye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Oct 5 05:57:45 localhost ceph-mon[316511]: Reconfiguring mgr.np0005471150.zwqxye (monmap changed)... 
Oct 5 05:57:45 localhost ceph-mon[316511]: Reconfiguring daemon mgr.np0005471150.zwqxye on np0005471150.localdomain Oct 5 05:57:45 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:45 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:45 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471151.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Oct 5 05:57:46 localhost openstack_network_exporter[250246]: ERROR 09:57:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:57:46 localhost openstack_network_exporter[250246]: ERROR 09:57:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:57:46 localhost openstack_network_exporter[250246]: ERROR 09:57:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:57:46 localhost openstack_network_exporter[250246]: ERROR 09:57:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:57:46 localhost openstack_network_exporter[250246]: Oct 5 05:57:46 localhost openstack_network_exporter[250246]: ERROR 09:57:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:57:46 localhost openstack_network_exporter[250246]: Oct 5 05:57:46 localhost ceph-mon[316511]: Reconfiguring crash.np0005471151 (monmap changed)... 
Oct 5 05:57:46 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471151 on np0005471151.localdomain Oct 5 05:57:46 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:46 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' Oct 5 05:57:46 localhost ceph-mon[316511]: from='mgr.26993 172.18.0.106:0/1541797612' entity='mgr.np0005471150.zwqxye' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Oct 5 05:57:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Oct 5 05:57:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Oct 5 05:57:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 e86: 6 total, 6 up, 6 in Oct 5 05:57:47 localhost systemd[1]: session-71.scope: Deactivated successfully. Oct 5 05:57:47 localhost systemd[1]: session-71.scope: Consumed 23.606s CPU time. Oct 5 05:57:47 localhost systemd-logind[760]: Session 71 logged out. Waiting for processes to exit. Oct 5 05:57:47 localhost systemd-logind[760]: Removed session 71. Oct 5 05:57:47 localhost sshd[317503]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:57:47 localhost systemd-logind[760]: New session 74 of user ceph-admin. Oct 5 05:57:47 localhost systemd[1]: Started Session 74 of User ceph-admin. Oct 5 05:57:47 localhost ceph-mon[316511]: Reconfiguring osd.2 (monmap changed)... Oct 5 05:57:47 localhost ceph-mon[316511]: Reconfiguring daemon osd.2 on np0005471151.localdomain Oct 5 05:57:47 localhost ceph-mon[316511]: from='client.? 172.18.0.200:0/821573250' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: Activating manager daemon np0005471151.jecxod Oct 5 05:57:47 localhost ceph-mon[316511]: from='client.? 
172.18.0.200:0/821573250' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:57:47 localhost ceph-mon[316511]: Manager daemon np0005471151.jecxod is now available Oct 5 05:57:47 localhost ceph-mon[316511]: removing stray HostCache host record np0005471148.localdomain.devices.0 Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"}]': finished Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005471148.localdomain.devices.0"}]': finished Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/mirror_snapshot_schedule"} : dispatch Oct 5 
05:57:47 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch Oct 5 05:57:47 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471151.jecxod/trash_purge_schedule"} : dispatch Oct 5 05:57:48 localhost podman[317614]: 2025-10-05 09:57:48.494187888 +0000 UTC m=+0.074343773 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_BRANCH=main, distribution-scope=public, release=553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=) Oct 5 05:57:48 localhost podman[317614]: 2025-10-05 09:57:48.623206123 +0000 UTC m=+0.203362048 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, RELEASE=main, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 05:57:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1019841793 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:57:50 localhost ceph-mon[316511]: [05/Oct/2025:09:57:48] ENGINE Bus STARTING Oct 5 05:57:50 localhost ceph-mon[316511]: [05/Oct/2025:09:57:48] ENGINE Serving on https://172.18.0.107:7150 Oct 5 05:57:50 localhost ceph-mon[316511]: [05/Oct/2025:09:57:48] ENGINE Client ('172.18.0.107', 58322) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Oct 5 05:57:50 localhost ceph-mon[316511]: [05/Oct/2025:09:57:48] ENGINE Serving on http://172.18.0.107:8765 Oct 5 05:57:50 localhost ceph-mon[316511]: [05/Oct/2025:09:57:48] ENGINE Bus STARTED Oct 5 05:57:50 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 
localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:57:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:57:50 localhost systemd[1]: tmp-crun.sKZBBC.mount: Deactivated successfully. Oct 5 05:57:50 localhost podman[317821]: 2025-10-05 09:57:50.387859677 +0000 UTC m=+0.082504330 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:57:50 localhost systemd[1]: tmp-crun.pMlvzT.mount: Deactivated successfully. 
Oct 5 05:57:50 localhost podman[317820]: 2025-10-05 09:57:50.405780336 +0000 UTC m=+0.099621536 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 05:57:50 localhost podman[317820]: 2025-10-05 09:57:50.416657519 +0000 UTC m=+0.110498709 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:57:50 localhost podman[317821]: 2025-10-05 09:57:50.432137391 +0000 UTC m=+0.126782044 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:57:50 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:57:50 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 
05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Oct 5 05:57:51 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:57:52 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M Oct 5 05:57:52 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:57:52 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M Oct 5 05:57:52 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:57:52 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M Oct 5 05:57:52 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Oct 5 05:57:52 localhost ceph-mon[316511]: Updating 
np0005471150.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:52 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:52 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf Oct 5 05:57:52 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf Oct 5 05:57:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:57:53 localhost systemd[1]: tmp-crun.TdzeeU.mount: Deactivated successfully. Oct 5 05:57:53 localhost podman[318416]: 2025-10-05 09:57:53.124135992 +0000 UTC m=+0.065631990 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350) Oct 5 05:57:53 localhost podman[318416]: 2025-10-05 09:57:53.136317841 +0000 UTC m=+0.077813869 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 5 05:57:53 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 05:57:53 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:53 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:57:53 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:57:53 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:57:53 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:57:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020051180 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:54 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:57:54 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:57:54 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:54 localhost ceph-mon[316511]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Oct 5 05:57:54 localhost ceph-mon[316511]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Oct 5 05:57:54 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Oct 5 05:57:55 localhost ceph-mon[316511]: Reconfiguring osd.2 (monmap changed)...
Oct 5 05:57:55 localhost ceph-mon[316511]: Reconfiguring daemon osd.2 on np0005471151.localdomain
Oct 5 05:57:55 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:55 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Oct 5 05:57:56 localhost podman[248157]: time="2025-10-05T09:57:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 05:57:56 localhost podman[248157]: @ - - [05/Oct/2025:09:57:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1"
Oct 5 05:57:56 localhost podman[248157]: @ - - [05/Oct/2025:09:57:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19298 "" "Go-http-client/1.1"
Oct 5 05:57:56 localhost ceph-mon[316511]: Reconfiguring osd.5 (monmap changed)...
Oct 5 05:57:56 localhost ceph-mon[316511]: Reconfiguring daemon osd.5 on np0005471151.localdomain
Oct 5 05:57:56 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:56 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:56 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005471151.uyxcpj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 5 05:57:57 localhost ceph-mon[316511]: Reconfiguring mds.mds.np0005471151.uyxcpj (monmap changed)...
Oct 5 05:57:57 localhost ceph-mon[316511]: Reconfiguring daemon mds.mds.np0005471151.uyxcpj on np0005471151.localdomain
Oct 5 05:57:57 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:57 localhost ceph-mon[316511]: Reconfiguring mgr.np0005471151.jecxod (monmap changed)...
Oct 5 05:57:57 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:57 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005471151.jecxod", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 5 05:57:57 localhost ceph-mon[316511]: Reconfiguring daemon mgr.np0005471151.jecxod on np0005471151.localdomain
Oct 5 05:57:57 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost podman[318632]:
Oct 5 05:57:58 localhost podman[318632]: 2025-10-05 09:57:58.404835667 +0000 UTC m=+0.062352719 container create 6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_robinson, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=553, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=)
Oct 5 05:57:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 05:57:58 localhost systemd[1]: Started libpod-conmon-6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162.scope.
Oct 5 05:57:58 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:58 localhost podman[318632]: 2025-10-05 09:57:58.469973021 +0000 UTC m=+0.127490003 container init 6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_robinson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., version=7, release=553, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux )
Oct 5 05:57:58 localhost podman[318632]: 2025-10-05 09:57:58.377286239 +0000 UTC m=+0.034803231 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:58 localhost podman[318632]: 2025-10-05 09:57:58.495226755 +0000 UTC m=+0.152743737 container start 6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_robinson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 05:57:58 localhost podman[318632]: 2025-10-05 09:57:58.495474881 +0000 UTC m=+0.152991913 container attach 6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_robinson, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, name=rhceph, version=7, distribution-scope=public, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 5 05:57:58 localhost stoic_robinson[318654]: 167 167
Oct 5 05:57:58 localhost systemd[1]: libpod-6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162.scope: Deactivated successfully.
Oct 5 05:57:58 localhost podman[318632]: 2025-10-05 09:57:58.552464849 +0000 UTC m=+0.209981881 container died 6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_robinson, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph)
Oct 5 05:57:58 localhost podman[318664]: 2025-10-05 09:57:58.588422121 +0000 UTC m=+0.082912630 container remove 6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_robinson, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7)
Oct 5 05:57:58 localhost systemd[1]: libpod-conmon-6acb9cb7fcd8bb82970179dd542f65bf08f862b83b9ca65a8919bb9b8f068162.scope: Deactivated successfully.
Oct 5 05:57:58 localhost podman[318647]: 2025-10-05 09:57:58.543092518 +0000 UTC m=+0.104872502 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:57:58 localhost podman[318647]: 2025-10-05 09:57:58.626248255 +0000 UTC m=+0.188028209 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 05:57:58 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[316511]: Reconfiguring crash.np0005471152 (monmap changed)...
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005471152.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 5 05:57:58 localhost ceph-mon[316511]: Reconfiguring daemon crash.np0005471152 on np0005471152.localdomain
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:58 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 5 05:57:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054672 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:57:59 localhost podman[318739]:
Oct 5 05:57:59 localhost podman[318739]: 2025-10-05 09:57:59.258353146 +0000 UTC m=+0.076129092 container create 2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_lederberg, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True)
Oct 5 05:57:59 localhost systemd[1]: Started libpod-conmon-2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248.scope.
Oct 5 05:57:59 localhost systemd[1]: Started libcrun container.
Oct 5 05:57:59 localhost podman[318739]: 2025-10-05 09:57:59.321415163 +0000 UTC m=+0.139191109 container init 2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_lederberg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 05:57:59 localhost podman[318739]: 2025-10-05 09:57:59.228544256 +0000 UTC m=+0.046320232 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:57:59 localhost podman[318739]: 2025-10-05 09:57:59.331188885 +0000 UTC m=+0.148964871 container start 2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_lederberg, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12)
Oct 5 05:57:59 localhost podman[318739]: 2025-10-05 09:57:59.331538995 +0000 UTC m=+0.149314951 container attach 2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_lederberg, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc.)
Oct 5 05:57:59 localhost laughing_lederberg[318754]: 167 167
Oct 5 05:57:59 localhost systemd[1]: libpod-2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248.scope: Deactivated successfully.
Oct 5 05:57:59 localhost podman[318739]: 2025-10-05 09:57:59.333907751 +0000 UTC m=+0.151683737 container died 2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_lederberg, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, version=7, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12)
Oct 5 05:57:59 localhost systemd[1]: tmp-crun.XZKS1N.mount: Deactivated successfully.
Oct 5 05:57:59 localhost systemd[1]: var-lib-containers-storage-overlay-3ab49dced8c794336369fccd37f8ce4204eecba8a5fdc655b95622acc8614d44-merged.mount: Deactivated successfully.
Oct 5 05:57:59 localhost systemd[1]: var-lib-containers-storage-overlay-b2c3c38ee20c26f8165c333a6d6ab43fc3a3ab7f5d349e20dbc43cc518b4ae55-merged.mount: Deactivated successfully.
Oct 5 05:57:59 localhost podman[318760]: 2025-10-05 09:57:59.44084243 +0000 UTC m=+0.093928338 container remove 2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_lederberg, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-type=git)
Oct 5 05:57:59 localhost systemd[1]: libpod-conmon-2938fb22865cf22886475b874cd999e8bd64e3c980cf74c47108a13685949248.scope: Deactivated successfully.
Oct 5 05:57:59 localhost ceph-mon[316511]: Reconfiguring osd.0 (monmap changed)...
Oct 5 05:57:59 localhost ceph-mon[316511]: Reconfiguring daemon osd.0 on np0005471152.localdomain
Oct 5 05:57:59 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod'
Oct 5 05:57:59 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Oct 5 05:58:00 localhost podman[318838]:
Oct 5 05:58:00 localhost podman[318838]: 2025-10-05 09:58:00.364306798 +0000 UTC m=+0.074477846 container create 5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_northcutt, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Oct 5 05:58:00 localhost systemd[1]: Started libpod-conmon-5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0.scope.
Oct 5 05:58:00 localhost systemd[1]: Started libcrun container.
Oct 5 05:58:00 localhost podman[318838]: 2025-10-05 09:58:00.424287389 +0000 UTC m=+0.134458467 container init 5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_northcutt, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64)
Oct 5 05:58:00 localhost podman[318838]: 2025-10-05 09:58:00.433778234 +0000 UTC m=+0.143949312 container start 5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_northcutt, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55)
Oct 5 05:58:00 localhost podman[318838]: 2025-10-05 09:58:00.434020501 +0000 UTC m=+0.144191579 container attach 5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_northcutt, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 5 05:58:00 localhost podman[318838]: 2025-10-05 09:58:00.336092152 +0000 UTC m=+0.046263260 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 05:58:00 localhost nostalgic_northcutt[318853]: 167 167
Oct 5 05:58:00 localhost systemd[1]: libpod-5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0.scope: Deactivated successfully.
Oct 5 05:58:00 localhost podman[318838]: 2025-10-05 09:58:00.437164348 +0000 UTC m=+0.147335426 container died 5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_northcutt, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, CEPH_POINT_RELEASE=, version=7, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, distribution-scope=public, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 5 05:58:00 localhost systemd[1]: var-lib-containers-storage-overlay-4dc1d52e057f94480c765cb8b6d631ffce59a867f3c21894c9d8d7dff1b3370a-merged.mount: Deactivated successfully.
Oct 5 05:58:00 localhost podman[318858]: 2025-10-05 09:58:00.531316402 +0000 UTC m=+0.084794245 container remove 5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_northcutt, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Oct 5 05:58:00 localhost systemd[1]: libpod-conmon-5bcf887292711fea7950f6ac6aad51dd7a9553cf544ddf04a2d60fc3e38707e0.scope: Deactivated successfully. Oct 5 05:58:00 localhost ceph-mon[316511]: Reconfiguring osd.3 (monmap changed)... 
Oct 5 05:58:00 localhost ceph-mon[316511]: Reconfiguring daemon osd.3 on np0005471152.localdomain Oct 5 05:58:00 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:00 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:00 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:00 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:00 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:58:00 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:01 localhost ceph-mon[316511]: Saving service mon spec with placement label:mon Oct 5 05:58:01 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:01 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:58:01 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:01 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:01 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:58:03 localhost ceph-mon[316511]: Reconfiguring mon.np0005471150 (monmap changed)... 
Oct 5 05:58:03 localhost ceph-mon[316511]: Reconfiguring daemon mon.np0005471150 on np0005471150.localdomain Oct 5 05:58:03 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:03 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:58:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:58:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:58:03 localhost systemd[1]: tmp-crun.llUZd2.mount: Deactivated successfully. Oct 5 05:58:03 localhost podman[318934]: 2025-10-05 09:58:03.785548825 +0000 UTC m=+0.101049396 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 05:58:03 localhost systemd[1]: tmp-crun.mS1u9U.mount: Deactivated successfully. Oct 5 05:58:03 localhost podman[318935]: 2025-10-05 09:58:03.836916497 +0000 UTC m=+0.150032071 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 05:58:03 localhost podman[318935]: 2025-10-05 09:58:03.844974062 +0000 UTC m=+0.158089626 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:58:03 localhost podman[318934]: 2025-10-05 09:58:03.854818515 +0000 
UTC m=+0.170319066 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:58:03 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:58:03 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:58:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:04 localhost podman[319007]: Oct 5 05:58:04 localhost podman[319007]: 2025-10-05 09:58:04.220485724 +0000 UTC m=+0.079185828 container create c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_fermat, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=553, ceph=True) Oct 5 05:58:04 localhost ceph-mon[316511]: Reconfiguring mon.np0005471151 (monmap changed)... 
Oct 5 05:58:04 localhost ceph-mon[316511]: Reconfiguring daemon mon.np0005471151 on np0005471151.localdomain Oct 5 05:58:04 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:04 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:04 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Oct 5 05:58:04 localhost systemd[1]: Started libpod-conmon-c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352.scope. Oct 5 05:58:04 localhost systemd[1]: Started libcrun container. Oct 5 05:58:04 localhost podman[319007]: 2025-10-05 09:58:04.18769549 +0000 UTC m=+0.046395584 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 05:58:04 localhost podman[319007]: 2025-10-05 09:58:04.289272769 +0000 UTC m=+0.147972863 container init c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_fermat, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, ceph=True, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph) Oct 5 05:58:04 localhost podman[319007]: 2025-10-05 09:58:04.298188208 +0000 UTC m=+0.156888332 container start c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_fermat, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55) Oct 5 05:58:04 localhost podman[319007]: 2025-10-05 09:58:04.298655021 +0000 UTC m=+0.157355115 container attach c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_fermat, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_CLEAN=True, name=rhceph, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git) Oct 5 05:58:04 localhost systemd[1]: libpod-c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352.scope: Deactivated successfully. Oct 5 05:58:04 localhost gallant_fermat[319023]: 167 167 Oct 5 05:58:04 localhost nova_compute[297130]: 2025-10-05 09:58:04.301 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:04 localhost podman[319007]: 2025-10-05 09:58:04.303219568 +0000 UTC m=+0.161919682 container died c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_fermat, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Oct 5 05:58:04 localhost podman[319028]: 2025-10-05 09:58:04.394131692 +0000 UTC m=+0.080295949 container remove c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_fermat, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, ceph=True, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12) Oct 5 05:58:04 localhost systemd[1]: libpod-conmon-c90c031059f6037bd5e28f9a996429526b3a093d09dfbaaf28a87c50480b6352.scope: Deactivated successfully. 
Oct 5 05:58:05 localhost ceph-mon[316511]: Reconfiguring mon.np0005471152 (monmap changed)... Oct 5 05:58:05 localhost ceph-mon[316511]: Reconfiguring daemon mon.np0005471152 on np0005471152.localdomain Oct 5 05:58:05 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:05 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:58:07 localhost nova_compute[297130]: 2025-10-05 09:58:07.270 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:07 localhost nova_compute[297130]: 2025-10-05 09:58:07.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:07 localhost nova_compute[297130]: 2025-10-05 09:58:07.271 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:58:07 localhost nova_compute[297130]: 2025-10-05 09:58:07.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:58:07 localhost nova_compute[297130]: 2025-10-05 09:58:07.296 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:58:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:58:07 localhost podman[319044]: 2025-10-05 09:58:07.915615063 +0000 UTC m=+0.082749507 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 05:58:07 localhost podman[319044]: 2025-10-05 09:58:07.953275482 +0000 UTC m=+0.120409926 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:58:07 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:58:07 localhost podman[319045]: 2025-10-05 09:58:07.974194795 +0000 UTC m=+0.140962059 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 05:58:08 localhost podman[319045]: 2025-10-05 09:58:08.042249151 +0000 UTC m=+0.209016425 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Oct 5 05:58:08 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:58:08 localhost systemd[1]: session-72.scope: Deactivated successfully. Oct 5 05:58:08 localhost systemd[1]: session-72.scope: Consumed 1.738s CPU time. Oct 5 05:58:08 localhost systemd-logind[760]: Session 72 logged out. Waiting for processes to exit. Oct 5 05:58:08 localhost systemd-logind[760]: Removed session 72. 
Oct 5 05:58:08 localhost nova_compute[297130]: 2025-10-05 09:58:08.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:08 localhost nova_compute[297130]: 2025-10-05 09:58:08.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.294 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.294 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.294 2 DEBUG 
oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:58:10 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:58:10 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/801126901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.741 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.912 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.914 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11970MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.915 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.915 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.961 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.961 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:58:10 localhost nova_compute[297130]: 2025-10-05 09:58:10.977 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:58:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:58:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3488957969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:58:11 localhost nova_compute[297130]: 2025-10-05 09:58:11.450 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:58:11 localhost nova_compute[297130]: 2025-10-05 09:58:11.456 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:58:11 localhost nova_compute[297130]: 2025-10-05 09:58:11.475 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:58:11 
localhost nova_compute[297130]: 2025-10-05 09:58:11.478 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:58:11 localhost nova_compute[297130]: 2025-10-05 09:58:11.478 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:58:12 localhost nova_compute[297130]: 2025-10-05 09:58:12.479 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:12 localhost nova_compute[297130]: 2025-10-05 09:58:12.480 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:58:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:16 localhost openstack_network_exporter[250246]: ERROR 09:58:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:58:16 localhost openstack_network_exporter[250246]: ERROR 09:58:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:58:16 localhost openstack_network_exporter[250246]: ERROR 09:58:16 appctl.go:144: Failed to get 
PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:58:16 localhost openstack_network_exporter[250246]: ERROR 09:58:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:58:16 localhost openstack_network_exporter[250246]: Oct 5 05:58:16 localhost openstack_network_exporter[250246]: ERROR 09:58:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:58:16 localhost openstack_network_exporter[250246]: Oct 5 05:58:18 localhost systemd[1]: Stopping User Manager for UID 1003... Oct 5 05:58:18 localhost systemd[314781]: Activating special unit Exit the Session... Oct 5 05:58:18 localhost systemd[314781]: Stopped target Main User Target. Oct 5 05:58:18 localhost systemd[314781]: Stopped target Basic System. Oct 5 05:58:18 localhost systemd[314781]: Stopped target Paths. Oct 5 05:58:18 localhost systemd[314781]: Stopped target Sockets. Oct 5 05:58:18 localhost systemd[314781]: Stopped target Timers. Oct 5 05:58:18 localhost systemd[314781]: Stopped Mark boot as successful after the user session has run 2 minutes. Oct 5 05:58:18 localhost systemd[314781]: Stopped Daily Cleanup of User's Temporary Directories. Oct 5 05:58:18 localhost systemd[314781]: Closed D-Bus User Message Bus Socket. Oct 5 05:58:18 localhost systemd[314781]: Stopped Create User's Volatile Files and Directories. Oct 5 05:58:18 localhost systemd[314781]: Removed slice User Application Slice. Oct 5 05:58:18 localhost systemd[314781]: Reached target Shutdown. Oct 5 05:58:18 localhost systemd[314781]: Finished Exit the Session. Oct 5 05:58:18 localhost systemd[314781]: Reached target Exit the Session. Oct 5 05:58:18 localhost systemd[1]: user@1003.service: Deactivated successfully. Oct 5 05:58:18 localhost systemd[1]: Stopped User Manager for UID 1003. Oct 5 05:58:18 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... 
Oct 5 05:58:18 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Oct 5 05:58:18 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Oct 5 05:58:18 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Oct 5 05:58:18 localhost systemd[1]: Removed slice User Slice of UID 1003. Oct 5 05:58:18 localhost systemd[1]: user-1003.slice: Consumed 2.315s CPU time. Oct 5 05:58:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 05:58:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4011923649' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 05:58:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 05:58:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4011923649' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 05:58:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:58:20.396 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:58:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:58:20.396 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:58:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:58:20.397 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:58:20 localhost podman[319134]: 2025-10-05 09:58:20.920470287 +0000 UTC m=+0.080828533 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:58:20 localhost podman[319134]: 2025-10-05 09:58:20.929535879 +0000 UTC m=+0.089894105 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:58:20 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:58:20 localhost podman[319133]: 2025-10-05 09:58:20.978115252 +0000 UTC m=+0.142008537 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:58:21 localhost podman[319133]: 2025-10-05 09:58:21.017315914 +0000 UTC m=+0.181209229 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 05:58:21 
localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:58:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:23 localhost podman[319176]: 2025-10-05 09:58:23.910889521 +0000 UTC m=+0.074989569 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 05:58:23 localhost podman[319176]: 2025-10-05 09:58:23.929268284 +0000 UTC m=+0.093368372 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41) Oct 5 05:58:23 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:58:26 localhost podman[248157]: time="2025-10-05T09:58:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:58:26 localhost podman[248157]: @ - - [05/Oct/2025:09:58:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:58:26 localhost podman[248157]: @ - - [05/Oct/2025:09:58:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19312 "" "Go-http-client/1.1" Oct 5 05:58:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:58:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:28 localhost systemd[1]: tmp-crun.HJU4Ah.mount: Deactivated successfully. Oct 5 05:58:28 localhost podman[319195]: 2025-10-05 09:58:28.926752198 +0000 UTC m=+0.090942606 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:58:28 localhost podman[319195]: 2025-10-05 09:58:28.930988575 +0000 UTC m=+0.095179003 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Oct 5 05:58:28 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:58:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:58:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 05:58:34 localhost podman[319213]: 2025-10-05 09:58:34.909488032 +0000 UTC m=+0.080606117 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 5 05:58:34 localhost podman[319213]: 2025-10-05 09:58:34.951201553 +0000 UTC m=+0.122319598 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 5 05:58:34 localhost systemd[1]: tmp-crun.Mosg2f.mount: Deactivated successfully.
Oct 5 05:58:34 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 05:58:34 localhost podman[319214]: 2025-10-05 09:58:34.974480633 +0000 UTC m=+0.142543513 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 05:58:34 localhost podman[319214]: 2025-10-05 09:58:34.987159025 +0000 UTC m=+0.155221905 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 05:58:34 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 05:58:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 05:58:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.881 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 09:58:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 05:58:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:58:38 localhost podman[319257]: 2025-10-05 09:58:38.914877384 +0000 UTC m=+0.082774576 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 5 05:58:38 localhost podman[319257]: 2025-10-05 09:58:38.927293441 +0000 UTC m=+0.095190623 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 05:58:38 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 05:58:38 localhost podman[319258]: 2025-10-05 09:58:38.973459737 +0000 UTC m=+0.138413177 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 5 05:58:39 localhost podman[319258]: 2025-10-05 09:58:39.043118277 +0000 UTC m=+0.208071717 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 5 05:58:39 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 05:58:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:58:46 localhost openstack_network_exporter[250246]: ERROR 09:58:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:58:46 localhost openstack_network_exporter[250246]: ERROR 09:58:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:58:46 localhost openstack_network_exporter[250246]: ERROR 09:58:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:58:46 localhost openstack_network_exporter[250246]: ERROR 09:58:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:58:46 localhost openstack_network_exporter[250246]:
Oct 5 05:58:46 localhost openstack_network_exporter[250246]: ERROR 09:58:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:58:46 localhost openstack_network_exporter[250246]:
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.589279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328589335, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2228, "num_deletes": 255, "total_data_size": 6581619, "memory_usage": 7027504, "flush_reason": "Manual Compaction"}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328609946, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3985899, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12072, "largest_seqno": 14299, "table_properties": {"data_size": 3976694, "index_size": 5518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22751, "raw_average_key_size": 22, "raw_value_size": 3956892, "raw_average_value_size": 3852, "num_data_blocks": 241, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658256, "oldest_key_time": 1759658256, "file_creation_time": 1759658328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 20717 microseconds, and 8695 cpu microseconds.
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.609997) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3985899 bytes OK
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.610025) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.612116) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.612140) EVENT_LOG_v1 {"time_micros": 1759658328612134, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.612162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 6570880, prev total WAL file size 6570880, number of live WAL files 2.
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.613606) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end)
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3892KB)], [15(15MB)]
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328613656, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 19767532, "oldest_snapshot_seqno": -1}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11998 keys, 16779825 bytes, temperature: kUnknown
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328714994, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 16779825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16712062, "index_size": 36686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30021, "raw_key_size": 322511, "raw_average_key_size": 26, "raw_value_size": 16508437, "raw_average_value_size": 1375, "num_data_blocks": 1394, "num_entries": 11998, "num_filter_entries": 11998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.715249) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 16779825 bytes
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.717001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 165.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 15.1 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(9.2) write-amplify(4.2) OK, records in: 12544, records dropped: 546 output_compression: NoCompression
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.717030) EVENT_LOG_v1 {"time_micros": 1759658328717017, "job": 6, "event": "compaction_finished", "compaction_time_micros": 101412, "compaction_time_cpu_micros": 44066, "output_level": 6, "num_output_files": 1, "total_output_size": 16779825, "num_input_records": 12544, "num_output_records": 11998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328717746, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658328719905, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.613492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.719984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.719990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.719993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.719996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:58:48 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:58:48.720000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:58:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 05:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 05:58:51 localhost podman[319303]: 2025-10-05 09:58:51.912684055 +0000 UTC m=+0.080859054 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:58:51 localhost podman[319303]: 2025-10-05 09:58:51.925571364 +0000 UTC m=+0.093746353 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 05:58:51 localhost podman[319302]: 2025-10-05 09:58:51.968662294 +0000 UTC m=+0.138489759 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
maintainer=OpenStack Kubernetes Operator team) Oct 5 05:58:51 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:58:52 localhost podman[319302]: 2025-10-05 09:58:52.033502721 +0000 UTC m=+0.203330186 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001) Oct 5 05:58:52 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:58:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) Oct 5 05:58:54 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1784405210' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch Oct 5 05:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:58:54 localhost podman[319343]: 2025-10-05 09:58:54.904333554 +0000 UTC m=+0.077520711 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal) Oct 5 05:58:54 localhost podman[319343]: 2025-10-05 09:58:54.919427534 +0000 UTC m=+0.092614701 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Oct 5 05:58:54 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:58:56 localhost podman[248157]: time="2025-10-05T09:58:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:58:56 localhost podman[248157]: @ - - [05/Oct/2025:09:58:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:58:56 localhost podman[248157]: @ - - [05/Oct/2025:09:58:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19294 "" "Go-http-client/1.1" Oct 5 05:58:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:58:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 05:58:59 localhost systemd[1]: tmp-crun.BZP79k.mount: Deactivated successfully. Oct 5 05:58:59 localhost podman[319365]: 2025-10-05 09:58:59.907739032 +0000 UTC m=+0.075103934 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 05:58:59 localhost podman[319365]: 2025-10-05 09:58:59.942212762 +0000 UTC m=+0.109577644 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 05:58:59 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 05:59:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:05 localhost nova_compute[297130]: 2025-10-05 09:59:05.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:59:05 localhost systemd[1]: tmp-crun.WOAVfU.mount: Deactivated successfully. 
Oct 5 05:59:05 localhost podman[319469]: 2025-10-05 09:59:05.927529318 +0000 UTC m=+0.097687933 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 05:59:05 localhost podman[319469]: 2025-10-05 09:59:05.97034076 +0000 UTC m=+0.140499375 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 05:59:05 localhost systemd[1]: tmp-crun.hqMCcK.mount: Deactivated successfully. 
Oct 5 05:59:05 localhost podman[319470]: 2025-10-05 09:59:05.983369144 +0000 UTC m=+0.151299187 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:59:05 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 05:59:05 localhost podman[319470]: 2025-10-05 09:59:05.997097906 +0000 UTC m=+0.165027929 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 05:59:06 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:59:06 localhost ceph-mon[316511]: from='mgr.34248 172.18.0.107:0/1912291206' entity='mgr.np0005471151.jecxod' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:59:06 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:59:07 localhost nova_compute[297130]: 2025-10-05 09:59:07.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:07 localhost nova_compute[297130]: 2025-10-05 09:59:07.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 05:59:07 localhost nova_compute[297130]: 2025-10-05 09:59:07.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 05:59:07 localhost nova_compute[297130]: 2025-10-05 09:59:07.295 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 05:59:07 localhost ceph-mon[316511]: from='mgr.34248 ' entity='mgr.np0005471151.jecxod' Oct 5 05:59:08 localhost nova_compute[297130]: 2025-10-05 09:59:08.290 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:08 localhost nova_compute[297130]: 2025-10-05 09:59:08.291 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:09 localhost nova_compute[297130]: 2025-10-05 09:59:09.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:59:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 05:59:09 localhost podman[319511]: 2025-10-05 09:59:09.911060742 +0000 UTC m=+0.079053394 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 05:59:10 localhost systemd[1]: tmp-crun.byavji.mount: Deactivated successfully. 
Oct 5 05:59:10 localhost podman[319510]: 2025-10-05 09:59:10.004513135 +0000 UTC m=+0.175234793 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 05:59:10 localhost podman[319510]: 2025-10-05 09:59:10.016055886 +0000 UTC m=+0.186777564 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, tcib_managed=true) Oct 5 05:59:10 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:59:10 localhost podman[319511]: 2025-10-05 09:59:10.033167213 +0000 UTC m=+0.201159855 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 05:59:10 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m 
Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.294 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.294 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:59:10 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:59:10 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/160494923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.713 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.919 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.921 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11978MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.921 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.922 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.994 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 05:59:10 localhost nova_compute[297130]: 2025-10-05 09:59:10.995 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 05:59:11 localhost nova_compute[297130]: 2025-10-05 09:59:11.018 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mgr fail"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/4285775013' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 e87: 6 total, 6 up, 6 in Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr handle_mgr_map Activating! Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr handle_mgr_map I am now activating Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1121963598' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471150"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mon metadata", "id": "np0005471150"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471151"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mon metadata", "id": "np0005471151"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005471152"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mon metadata", "id": "np0005471152"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471151.uyxcpj"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mds metadata", "who": "mds.np0005471151.uyxcpj"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).mds e16 all = 0 Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471150.bsiqok"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' 
cmd={"prefix": "mds metadata", "who": "mds.np0005471150.bsiqok"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).mds e16 all = 0 Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005471152.pozuqw"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mds metadata", "who": "mds.np0005471152.pozuqw"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).mds e16 all = 0 Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471152.kbhlus", "id": "np0005471152.kbhlus"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mgr metadata", "who": "np0005471152.kbhlus", "id": "np0005471152.kbhlus"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471150.zwqxye", "id": "np0005471150.zwqxye"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mgr metadata", "who": "np0005471150.zwqxye", "id": "np0005471150.zwqxye"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata", "id": 0} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) 
Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata", "id": 1} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata", "id": 2} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata", "id": 3} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata", "id": 4} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata", "id": 5} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mds metadata"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mds metadata"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).mds e16 all = 1 Oct 5 05:59:11 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd metadata"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon metadata"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mon metadata"} : dispatch Oct 5 05:59:11 localhost nova_compute[297130]: 2025-10-05 09:59:11.478 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 05:59:11 localhost nova_compute[297130]: 2025-10-05 09:59:11.485 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 05:59:11 localhost ceph-mgr[301363]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: balancer Oct 5 05:59:11 localhost ceph-mgr[301363]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: [balancer INFO root] Starting Oct 5 05:59:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_09:59:11 Oct 5 05:59:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 05:59:11 localhost ceph-mgr[301363]: [balancer INFO root] Some PGs (1.000000) 
are unknown; try again later Oct 5 05:59:11 localhost nova_compute[297130]: 2025-10-05 09:59:11.508 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 05:59:11 localhost nova_compute[297130]: 2025-10-05 09:59:11.510 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 05:59:11 localhost nova_compute[297130]: 2025-10-05 09:59:11.511 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:59:11 localhost systemd[1]: session-74.scope: Deactivated successfully. Oct 5 05:59:11 localhost systemd[1]: session-74.scope: Consumed 10.209s CPU time. Oct 5 05:59:11 localhost systemd-logind[760]: Session 74 logged out. Waiting for processes to exit. Oct 5 05:59:11 localhost systemd-logind[760]: Removed session 74. 
Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: cephadm Oct 5 05:59:11 localhost ceph-mgr[301363]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: crash Oct 5 05:59:11 localhost ceph-mgr[301363]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: devicehealth Oct 5 05:59:11 localhost ceph-mgr[301363]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: iostat Oct 5 05:59:11 localhost ceph-mgr[301363]: [devicehealth INFO root] Starting Oct 5 05:59:11 localhost ceph-mgr[301363]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: nfs Oct 5 05:59:11 localhost ceph-mgr[301363]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: orchestrator Oct 5 05:59:11 localhost ceph-mgr[301363]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: pg_autoscaler Oct 5 05:59:11 localhost ceph-mgr[301363]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: progress Oct 5 05:59:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] recovery thread starting Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] starting setup Oct 5 
05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: rbd_support Oct 5 05:59:11 localhost ceph-mgr[301363]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: restful Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} v 0) Oct 5 05:59:11 localhost ceph-mgr[301363]: [restful INFO root] server_addr: :: server_port: 8003 Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-mgr[301363]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: status Oct 5 05:59:11 localhost ceph-mgr[301363]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: telemetry Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [restful WARNING root] server not running: no certificate configured Oct 5 05:59:11 localhost ceph-mgr[301363]: [progress INFO root] Loading... Oct 5 05:59:11 localhost ceph-mgr[301363]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Oct 5 05:59:11 localhost ceph-mgr[301363]: [progress INFO root] Loaded OSDMap, ready. 
Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] PerfHandler: starting Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: vms, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: volumes, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: images, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_task_task: backups, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 05:59:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 05:59:11 localhost ceph-mgr[301363]: mgr load Constructed class from module: volumes Oct 5 05:59:11 localhost ceph-mon[316511]: from='client.? 172.18.0.200:0/4285775013' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: Activating manager daemon np0005471152.kbhlus Oct 5 05:59:11 localhost ceph-mon[316511]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TaskHandler: starting Oct 5 05:59:11 localhost ceph-mon[316511]: Manager daemon np0005471152.kbhlus is now available Oct 5 05:59:11 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/mirror_snapshot_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.643+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.643+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.643+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.643+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 
localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.643+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} v 0) Oct 5 05:59:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.655+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.655+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.655+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.655+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T09:59:11.655+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Oct 5 05:59:11 localhost ceph-mgr[301363]: [rbd_support INFO root] setup complete Oct 5 05:59:11 localhost sshd[319736]: main: sshd: ssh-rsa algorithm is disabled Oct 5 05:59:11 localhost systemd-logind[760]: New session 75 of user ceph-admin. Oct 5 05:59:11 localhost systemd[1]: Started Session 75 of User ceph-admin. 
Oct 5 05:59:12 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:59:12 localhost nova_compute[297130]: 2025-10-05 09:59:12.512 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:59:12 localhost nova_compute[297130]: 2025-10-05 09:59:12.514 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 05:59:12 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch
Oct 5 05:59:12 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005471152.kbhlus/trash_purge_schedule"} : dispatch
Oct 5 05:59:12 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:59:12] ENGINE Bus STARTING
Oct 5 05:59:12 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:59:12] ENGINE Bus STARTING
Oct 5 05:59:12 localhost podman[319847]: 2025-10-05 09:59:12.940254707 +0000 UTC m=+0.089065512 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, maintainer=Guillaume Abrioux , version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Oct 5 05:59:12 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:59:12] ENGINE Serving on http://172.18.0.108:8765
Oct 5 05:59:12 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:59:12] ENGINE Serving on http://172.18.0.108:8765
Oct 5 05:59:13 localhost podman[319847]: 2025-10-05 09:59:13.093154898 +0000 UTC m=+0.241965713 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Oct 5 05:59:13 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:59:13] ENGINE Serving on https://172.18.0.108:7150
Oct 5 05:59:13 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:59:13] ENGINE Serving on https://172.18.0.108:7150
Oct 5 05:59:13 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:59:13] ENGINE Bus STARTED
Oct 5 05:59:13 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:59:13] ENGINE Bus STARTED
Oct 5 05:59:13 localhost ceph-mgr[301363]: [cephadm INFO cherrypy.error] [05/Oct/2025:09:59:13] ENGINE Client ('172.18.0.108', 46836) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 5 05:59:13 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : [05/Oct/2025:09:59:13] ENGINE Client ('172.18.0.108', 46836) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 5 05:59:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:59:13 localhost ceph-mon[316511]: [05/Oct/2025:09:59:12] ENGINE Bus STARTING
Oct 5 05:59:13 localhost ceph-mon[316511]: [05/Oct/2025:09:59:12] ENGINE Serving on http://172.18.0.108:8765
Oct 5 05:59:13 localhost ceph-mon[316511]: [05/Oct/2025:09:59:13] ENGINE Serving on https://172.18.0.108:7150
Oct 5 05:59:13 localhost ceph-mon[316511]: [05/Oct/2025:09:59:13] ENGINE Bus STARTED
Oct 5 05:59:13 localhost ceph-mon[316511]: [05/Oct/2025:09:59:13] ENGINE Client ('172.18.0.108', 46836) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 5 05:59:13 localhost ceph-mon[316511]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Oct 5 05:59:13 localhost ceph-mon[316511]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Oct 5 05:59:13 localhost ceph-mon[316511]: Cluster is now healthy
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:59:13 localhost ceph-mgr[301363]: [devicehealth INFO root] Check health
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:59:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 05:59:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:59:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:15 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 05:59:15 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 05:59:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 05:59:16 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.269355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356269672, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 803, "num_deletes": 256, "total_data_size": 2093860, "memory_usage": 2195608, "flush_reason": "Manual Compaction"}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356281734, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1356453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14304, "largest_seqno": 15102, "table_properties": {"data_size": 1352564, "index_size": 1616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9414, "raw_average_key_size": 19, "raw_value_size": 1344161, "raw_average_value_size": 2771, "num_data_blocks": 68, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658329, "oldest_key_time": 1759658329, "file_creation_time": 1759658356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12196 microseconds, and 5452 cpu microseconds.
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.281811) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1356453 bytes OK
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.281835) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.284125) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.284157) EVENT_LOG_v1 {"time_micros": 1759658356284146, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.284183) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2089411, prev total WAL file size 2089411, number of live WAL files 2.
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.285188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353137' seq:72057594037927935, type:22 .. '6B760031373733' seq:0, type:0; will stop at (end)
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1324KB)], [18(16MB)]
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356285233, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18136278, "oldest_snapshot_seqno": -1}
Oct 5 05:59:16 localhost ceph-mgr[301363]: mgr.server handle_open ignoring open from mgr.np0005471151.jecxod 172.18.0.107:0/3938119053; not ready for session (expect reconnect)
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11941 keys, 16960962 bytes, temperature: kUnknown
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356385058, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 16960962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16894254, "index_size": 35759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29893, "raw_key_size": 323124, "raw_average_key_size": 27, "raw_value_size": 16692199, "raw_average_value_size": 1397, "num_data_blocks": 1338, "num_entries": 11941, "num_filter_entries": 11941, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.385339) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 16960962 bytes
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.387204) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 169.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 16.0 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(25.9) write-amplify(12.5) OK, records in: 12483, records dropped: 542 output_compression: NoCompression
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.387234) EVENT_LOG_v1 {"time_micros": 1759658356387221, "job": 8, "event": "compaction_finished", "compaction_time_micros": 99903, "compaction_time_cpu_micros": 45914, "output_level": 6, "num_output_files": 1, "total_output_size": 16960962, "num_input_records": 12483, "num_output_records": 11941, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356387517, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658356390030, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.285086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.390087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.390095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.390099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.390102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:59:16 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-09:59:16.390105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 05:59:16 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:16 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:16 localhost openstack_network_exporter[250246]: ERROR 09:59:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 05:59:16 localhost openstack_network_exporter[250246]: ERROR 09:59:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:59:16 localhost openstack_network_exporter[250246]: ERROR 09:59:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 05:59:16 localhost openstack_network_exporter[250246]: ERROR 09:59:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 05:59:16 localhost openstack_network_exporter[250246]:
Oct 5 05:59:16 localhost openstack_network_exporter[250246]: ERROR 09:59:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 05:59:16 localhost openstack_network_exporter[250246]:
Oct 5 05:59:16 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:16 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:16 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 05:59:16 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:16 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 05:59:16 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 05:59:16 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:16 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:16 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/etc/ceph/ceph.conf
Oct 5 05:59:16 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:16 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.conf
Oct 5 05:59:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005471151.jecxod", "id": "np0005471151.jecxod"} v 0)
Oct 5 05:59:17 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "mgr metadata", "who": "np0005471151.jecxod", "id": "np0005471151.jecxod"} : dispatch
Oct 5 05:59:17 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mgr[301363]: [cephadm INFO cephadm.serve] Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 05:59:17 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:17 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 05:59:18 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 5d1aa93b-081d-48a8-8cd9-ffd26345b921 (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:59:18 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 5d1aa93b-081d-48a8-8cd9-ffd26345b921 (Updating node-proxy deployment (+3 -> 3))
Oct 5 05:59:18 localhost ceph-mgr[301363]: [progress INFO root] Completed event 5d1aa93b-081d-48a8-8cd9-ffd26345b921 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 05:59:18 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758'
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 05:59:18 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev f66140a1-650a-42da-acc5-04dfd23f2c4e (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:59:18 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev f66140a1-650a-42da-acc5-04dfd23f2c4e (Updating node-proxy deployment (+3 -> 3)) Oct 5 05:59:18 localhost ceph-mgr[301363]: [progress INFO root] Completed event f66140a1-650a-42da-acc5-04dfd23f2c4e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 05:59:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 05:59:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:19 localhost ceph-mon[316511]: Updating np0005471152.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:59:19 localhost ceph-mon[316511]: Updating np0005471150.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:59:19 localhost ceph-mon[316511]: Updating np0005471151.localdomain:/var/lib/ceph/659062ac-50b4-5607-b699-3105da7f55ee/config/ceph.client.admin.keyring Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 05:59:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s Oct 5 05:59:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:59:20.397 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 05:59:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:59:20.398 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 05:59:20 localhost ovn_metadata_agent[163196]: 2025-10-05 09:59:20.399 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 05:59:21 localhost 
ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Oct 5 05:59:21 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 05:59:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 05:59:22 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 05:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 05:59:22 localhost podman[320797]: 2025-10-05 09:59:22.907857301 +0000 UTC m=+0.075378911 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 05:59:22 localhost podman[320797]: 2025-10-05 09:59:22.921300116 +0000 UTC m=+0.088821716 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 05:59:22 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 05:59:23 localhost podman[320798]: 2025-10-05 09:59:23.017922899 +0000 UTC m=+0.181669523 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 05:59:23 localhost podman[320798]: 2025-10-05 09:59:23.031119866 +0000 UTC m=+0.194866470 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 05:59:23 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:59:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Oct 5 05:59:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 05:59:25 localhost podman[320839]: 2025-10-05 09:59:25.143454696 +0000 UTC m=+0.081762599 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.) Oct 5 05:59:25 localhost podman[320839]: 2025-10-05 09:59:25.156001696 +0000 UTC m=+0.094309639 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41) Oct 5 05:59:25 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 05:59:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 5 05:59:26 localhost podman[248157]: time="2025-10-05T09:59:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:59:26 localhost podman[248157]: @ - - [05/Oct/2025:09:59:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:59:26 localhost podman[248157]: @ - - [05/Oct/2025:09:59:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19309 "" "Go-http-client/1.1" Oct 5 05:59:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 5 05:59:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Oct 5 05:59:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 05:59:30 localhost podman[320858]: 2025-10-05 09:59:30.920361843 +0000 UTC m=+0.086357686 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 05:59:30 localhost podman[320858]: 2025-10-05 09:59:30.954197297 +0000 UTC 
m=+0.120193110 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 05:59:30 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 05:59:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 05:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 05:59:36 localhost podman[320877]: 2025-10-05 09:59:36.919136545 +0000 UTC m=+0.085277088 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:59:36 localhost podman[320878]: 2025-10-05 09:59:36.974110116 +0000 UTC m=+0.135728712 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 05:59:36 localhost podman[320877]: 2025-10-05 09:59:36.984590968 +0000 UTC m=+0.150731491 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS) Oct 5 05:59:36 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 05:59:37 localhost podman[320878]: 2025-10-05 09:59:37.011322333 +0000 UTC m=+0.172940889 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 05:59:37 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 05:59:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 05:59:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 05:59:40 localhost podman[320918]: 2025-10-05 09:59:40.920415143 +0000 UTC m=+0.087284923 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller) Oct 5 05:59:40 localhost podman[320917]: 2025-10-05 09:59:40.974924992 +0000 UTC m=+0.145233797 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, config_id=iscsid) Oct 5 05:59:40 localhost podman[320917]: 2025-10-05 09:59:40.986758762 +0000 UTC m=+0.157067557 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 05:59:40 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 05:59:41 localhost podman[320918]: 2025-10-05 09:59:41.037256438 +0000 UTC m=+0.204126228 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 05:59:41 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 05:59:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 05:59:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:59:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:59:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:59:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 05:59:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 05:59:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:46 localhost openstack_network_exporter[250246]: ERROR 09:59:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:59:46 localhost openstack_network_exporter[250246]: ERROR 09:59:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 05:59:46 localhost openstack_network_exporter[250246]: ERROR 09:59:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 05:59:46 localhost openstack_network_exporter[250246]: ERROR 09:59:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 05:59:46 localhost openstack_network_exporter[250246]: Oct 5 05:59:46 localhost openstack_network_exporter[250246]: ERROR 09:59:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 05:59:46 localhost openstack_network_exporter[250246]: Oct 5 05:59:47 localhost ceph-mgr[301363]: 
log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 05:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 05:59:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:53 localhost podman[320960]: 2025-10-05 09:59:53.911565427 +0000 UTC m=+0.081044709 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, 
managed_by=edpm_ansible, config_id=edpm) Oct 5 05:59:53 localhost podman[320960]: 2025-10-05 09:59:53.921168564 +0000 UTC m=+0.090647846 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 05:59:53 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated 
successfully. Oct 5 05:59:53 localhost systemd[1]: tmp-crun.v5OKSe.mount: Deactivated successfully. Oct 5 05:59:53 localhost podman[320961]: 2025-10-05 09:59:53.974686385 +0000 UTC m=+0.141001239 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 05:59:54 localhost podman[320961]: 2025-10-05 09:59:54.015330448 +0000 UTC m=+0.181645272 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 05:59:54 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 05:59:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 05:59:55 localhost podman[321001]: 2025-10-05 09:59:55.908981036 +0000 UTC m=+0.078550929 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git) Oct 5 05:59:55 localhost podman[321001]: 2025-10-05 09:59:55.926236027 +0000 UTC m=+0.095805950 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, 
distribution-scope=public, build-date=2025-08-20T13:12:41, architecture=x86_64) Oct 5 05:59:55 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 05:59:56 localhost podman[248157]: time="2025-10-05T09:59:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 05:59:56 localhost podman[248157]: @ - - [05/Oct/2025:09:59:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 05:59:56 localhost podman[248157]: @ - - [05/Oct/2025:09:59:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19311 "" "Go-http-client/1.1" Oct 5 05:59:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 05:59:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 05:59:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:00 localhost ceph-mon[316511]: overall HEALTH_OK Oct 5 06:00:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:00:01 localhost podman[321021]: 2025-10-05 10:00:01.906220041 +0000 UTC m=+0.076632336 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:00:01 localhost podman[321021]: 2025-10-05 10:00:01.942272356 +0000 UTC 
m=+0.112684631 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent) Oct 5 06:00:01 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:00:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:07 localhost nova_compute[297130]: 2025-10-05 10:00:07.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:00:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:00:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:00:07 localhost podman[321040]: 2025-10-05 10:00:07.913415867 +0000 UTC m=+0.084898647 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 5 06:00:07 localhost podman[321040]: 2025-10-05 10:00:07.927223102 +0000 UTC m=+0.098705902 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 06:00:07 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:00:08 localhost podman[321041]: 2025-10-05 10:00:08.022915118 +0000 UTC m=+0.189955403 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:00:08 localhost podman[321041]: 2025-10-05 10:00:08.031306262 +0000 UTC m=+0.198346557 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 06:00:08 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 06:00:08 localhost nova_compute[297130]: 2025-10-05 10:00:08.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:08 localhost nova_compute[297130]: 2025-10-05 10:00:08.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 5 06:00:08 localhost nova_compute[297130]: 2025-10-05 10:00:08.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 5 06:00:08 localhost nova_compute[297130]: 2025-10-05 10:00:08.324 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 5 06:00:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:00:09 localhost nova_compute[297130]: 2025-10-05 10:00:09.320 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:10 localhost nova_compute[297130]: 2025-10-05 10:00:10.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:11 localhost nova_compute[297130]: 2025-10-05 10:00:11.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:00:11
Oct 5 06:00:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 5 06:00:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap
Oct 5 06:00:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['volumes', 'manila_metadata', '.mgr', 'manila_data', 'images', 'backups', 'vms']
Oct 5 06:00:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:00:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
Oct 5 06:00:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:00:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:00:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:00:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:00:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:00:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:00:11 localhost podman[321082]: 2025-10-05 10:00:11.931237156 +0000 UTC m=+0.090561763 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 5 06:00:11 localhost podman[321082]: 2025-10-05 10:00:11.939052674 +0000 UTC m=+0.098377281 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 5 06:00:11 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:00:11 localhost podman[321083]: 2025-10-05 10:00:11.9870075 +0000 UTC m=+0.143001345 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:00:12 localhost podman[321083]: 2025-10-05 10:00:12.052228787 +0000 UTC m=+0.208222602 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 06:00:12 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.290 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.290 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.291 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.291 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.292 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 06:00:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 06:00:12 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4098933533' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.746 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.916 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.917 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11935MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.917 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.918 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.974 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.974 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 5 06:00:12 localhost nova_compute[297130]: 2025-10-05 10:00:12.995 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 5 06:00:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 06:00:13 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2977809858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 06:00:13 localhost nova_compute[297130]: 2025-10-05 10:00:13.457 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 5 06:00:13 localhost nova_compute[297130]: 2025-10-05 10:00:13.463 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 5 06:00:13 localhost nova_compute[297130]: 2025-10-05 10:00:13.480 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 5 06:00:13 localhost nova_compute[297130]: 2025-10-05 10:00:13.482 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 5 06:00:13 localhost nova_compute[297130]: 2025-10-05 10:00:13.483 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:00:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:00:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:16 localhost openstack_network_exporter[250246]: ERROR 10:00:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:00:16 localhost openstack_network_exporter[250246]: ERROR 10:00:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:00:16 localhost openstack_network_exporter[250246]: ERROR 10:00:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:00:16 localhost openstack_network_exporter[250246]: ERROR 10:00:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:00:16 localhost openstack_network_exporter[250246]:
Oct 5 06:00:16 localhost openstack_network_exporter[250246]: ERROR 10:00:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:00:16 localhost openstack_network_exporter[250246]:
Oct 5 06:00:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:00:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 06:00:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 06:00:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 06:00:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 06:00:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 06:00:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 06:00:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 06:00:20 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 06:00:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 06:00:20 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:00:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:00:20 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 40e0d90b-7284-4e50-a5f7-8e28da1f475b (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:00:20 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 40e0d90b-7284-4e50-a5f7-8e28da1f475b (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:00:20 localhost ceph-mgr[301363]: [progress INFO root] Completed event 40e0d90b-7284-4e50-a5f7-8e28da1f475b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 06:00:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 06:00:20 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 06:00:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:00:20.399 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:00:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:00:20.400 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:00:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:00:20.400 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:00:20 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:00:20 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:00:21 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 06:00:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:00:22 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:00:23 localhost ceph-mgr[301363]:
log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:00:24 localhost systemd[1]: tmp-crun.CERsdW.mount: Deactivated successfully. Oct 5 06:00:24 localhost podman[321314]: 2025-10-05 10:00:24.918477201 +0000 UTC m=+0.087304663 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:00:24 localhost podman[321315]: 2025-10-05 10:00:24.930431384 +0000 UTC m=+0.093127786 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:00:24 localhost podman[321314]: 2025-10-05 10:00:24.931128263 +0000 UTC m=+0.099955705 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 06:00:24 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:00:25 localhost podman[321315]: 2025-10-05 10:00:25.015185366 +0000 UTC m=+0.177881748 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:00:25 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:00:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:26 localhost podman[248157]: time="2025-10-05T10:00:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:00:26 localhost podman[248157]: @ - - [05/Oct/2025:10:00:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 06:00:26 localhost podman[248157]: @ - - [05/Oct/2025:10:00:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19310 "" "Go-http-client/1.1" Oct 5 06:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:00:26 localhost podman[321357]: 2025-10-05 10:00:26.909909974 +0000 UTC m=+0.078427236 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Oct 5 06:00:26 localhost podman[321357]: 2025-10-05 10:00:26.924174201 +0000 UTC m=+0.092691473 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, version=9.6, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350) Oct 5 06:00:26 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:00:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:00:32 localhost podman[321378]: 2025-10-05 10:00:32.922363975 +0000 UTC m=+0.088977880 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent) Oct 5 06:00:32 localhost podman[321378]: 2025-10-05 10:00:32.952355381 +0000 UTC 
m=+0.118969266 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:00:32 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:00:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:00:35.029 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:00:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:00:35.030 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:00:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:00:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.882 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.885 12 DEBUG ceilometer.polling.manager 
[-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:00:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:00:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:38 localhost podman[321396]: 2025-10-05 10:00:38.923933153 +0000 UTC m=+0.087763406 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd) Oct 5 06:00:38 localhost podman[321396]: 2025-10-05 10:00:38.940291759 +0000 UTC m=+0.104122002 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd) Oct 5 06:00:38 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:00:39 localhost podman[321397]: 2025-10-05 10:00:39.033833435 +0000 UTC m=+0.194618773 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', 
'--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:00:39 localhost podman[321397]: 2025-10-05 10:00:39.04189632 +0000 UTC m=+0.202681638 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:00:39 localhost systemd[1]: 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:00:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e88 e88: 6 total, 6 up, 6 in Oct 5 06:00:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Oct 5 06:00:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 06:00:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:00:42.031 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:00:42 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 e89: 6 total, 6 up, 6 in Oct 5 06:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:00:42 localhost systemd[1]: tmp-crun.g4tG7V.mount: Deactivated successfully. 
Oct 5 06:00:42 localhost podman[321439]: 2025-10-05 10:00:42.911349735 +0000 UTC m=+0.073414816 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:00:42 localhost podman[321439]: 2025-10-05 10:00:42.924312935 +0000 UTC m=+0.086378056 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:00:42 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:00:42 localhost systemd[1]: tmp-crun.0LliGN.mount: Deactivated successfully. 
Oct 5 06:00:42 localhost podman[321440]: 2025-10-05 10:00:42.97543813 +0000 UTC m=+0.134966251 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001) Oct 5 06:00:43 localhost podman[321440]: 2025-10-05 10:00:43.097193772 +0000 UTC m=+0.256721863 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:00:43 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:00:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 145 MiB data, 688 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s Oct 5 06:00:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 145 MiB data, 688 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 42 op/s Oct 5 06:00:46 localhost openstack_network_exporter[250246]: ERROR 10:00:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:00:46 localhost openstack_network_exporter[250246]: ERROR 10:00:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:00:46 localhost openstack_network_exporter[250246]: ERROR 10:00:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:00:46 localhost openstack_network_exporter[250246]: ERROR 10:00:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:00:46 localhost openstack_network_exporter[250246]: Oct 5 06:00:46 localhost openstack_network_exporter[250246]: ERROR 10:00:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:00:46 localhost openstack_network_exporter[250246]: Oct 5 06:00:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s Oct 5 06:00:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:49 localhost ceph-mgr[301363]: log_channel(cluster) log 
[DBG] : pgmap v54: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s Oct 5 06:00:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s Oct 5 06:00:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 3.1 KiB/s rd, 475 B/s wr, 4 op/s Oct 5 06:00:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:00:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:00:55 localhost podman[321483]: 2025-10-05 10:00:55.15391903 +0000 UTC m=+0.092065156 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:00:55 localhost podman[321483]: 2025-10-05 10:00:55.168291641 +0000 UTC m=+0.106437747 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:00:55 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:00:55 localhost podman[321484]: 2025-10-05 10:00:55.254486502 +0000 UTC m=+0.187172406 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:00:55 localhost podman[321484]: 2025-10-05 
10:00:55.293334215 +0000 UTC m=+0.226020089 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:00:55 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:00:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s Oct 5 06:00:56 localhost podman[248157]: time="2025-10-05T10:00:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:00:56 localhost podman[248157]: @ - - [05/Oct/2025:10:00:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 06:00:56 localhost podman[248157]: @ - - [05/Oct/2025:10:00:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19316 "" "Go-http-client/1.1" Oct 5 06:00:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s Oct 5 06:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:00:57 localhost podman[321525]: 2025-10-05 10:00:57.927156415 +0000 UTC m=+0.090362598 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:00:57 localhost podman[321525]: 2025-10-05 10:00:57.965387481 +0000 UTC m=+0.128593634 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=) Oct 5 06:00:57 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:00:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:00:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:03 localhost sshd[321556]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:01:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:01:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:03 localhost podman[321557]: 2025-10-05 10:01:03.936236263 +0000 UTC m=+0.097919169 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:03 localhost podman[321557]: 2025-10-05 10:01:03.970342644 +0000 UTC m=+0.132025550 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent) Oct 5 06:01:03 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:01:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:07 localhost ovn_controller[157556]: 2025-10-05T10:01:07Z|00039|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory Oct 5 06:01:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 06:01:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:01:09 localhost podman[321575]: 2025-10-05 10:01:09.915571923 +0000 UTC m=+0.080459752 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:01:09 
localhost podman[321575]: 2025-10-05 10:01:09.929201292 +0000 UTC m=+0.094089101 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:01:09 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:01:10 localhost podman[321576]: 2025-10-05 10:01:10.021934296 +0000 UTC m=+0.181810186 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:01:10 localhost podman[321576]: 2025-10-05 10:01:10.058258648 +0000 UTC m=+0.218134488 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:01:10 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:01:10 localhost nova_compute[297130]: 2025-10-05 10:01:10.484 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:10 localhost nova_compute[297130]: 2025-10-05 10:01:10.485 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:10 localhost nova_compute[297130]: 2025-10-05 10:01:10.485 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 06:01:10 localhost nova_compute[297130]: 2025-10-05 10:01:10.485 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 06:01:10 localhost nova_compute[297130]: 2025-10-05 10:01:10.509 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 5 06:01:10 localhost nova_compute[297130]: 2025-10-05 10:01:10.510 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:11 localhost nova_compute[297130]: 2025-10-05 10:01:11.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:11 localhost nova_compute[297130]: 2025-10-05 10:01:11.298 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:01:11
Oct 5 06:01:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 5 06:01:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap
Oct 5 06:01:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_data', 'images', 'vms', 'manila_metadata', 'volumes', 'backups', '.mgr']
Oct 5 06:01:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes
Oct 5 06:01:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:01:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:01:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:01:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:01:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:01:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:01:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:01:12 localhost nova_compute[297130]: 2025-10-05 10:01:12.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:13 localhost nova_compute[297130]: 2025-10-05 10:01:13.273 2 DEBUG oslo_service.periodic_task
[None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:01:13 localhost nova_compute[297130]: 2025-10-05 10:01:13.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:01:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:01:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:13 localhost podman[321617]: 2025-10-05 10:01:13.917772606 +0000 UTC m=+0.082172390 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:13 localhost podman[321617]: 2025-10-05 10:01:13.930283215 +0000 UTC m=+0.094683039 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:13 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:01:14 localhost systemd[1]: tmp-crun.jxDGeY.mount: Deactivated successfully. Oct 5 06:01:14 localhost podman[321618]: 2025-10-05 10:01:14.027444562 +0000 UTC m=+0.188249266 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:01:14 localhost podman[321618]: 2025-10-05 10:01:14.070399659 +0000 UTC m=+0.231204393 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:01:14 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: 
Deactivated successfully.
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.301 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.301 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.302 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.302 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.303 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 5 06:01:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 06:01:14 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3174812813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.796 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.989 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.991 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11932MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.991 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:01:14 localhost nova_compute[297130]: 2025-10-05 10:01:14.992 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.053 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.054 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.519 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:01:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:15.615 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:01:15Z, description=, device_id=e9f6d1b2-c843-4f8a-a56d-2b4bef1b97ae, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bddc7dbd-b9be-4144-b0b1-e1b03149abb7, ip_allocation=immediate, mac_address=fa:16:3e:4f:f9:fb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=110, status=DOWN, tags=[], 
tenant_id=, updated_at=2025-10-05T10:01:15Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:01:15 localhost systemd[1]: tmp-crun.wzyF2f.mount: Deactivated successfully. Oct 5 06:01:15 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:01:15 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:01:15 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:01:15 localhost podman[321720]: 2025-10-05 10:01:15.826453903 +0000 UTC m=+0.054913090 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:01:15 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4248656278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.926 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.933 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.949 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.952 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:01:15 localhost nova_compute[297130]: 2025-10-05 10:01:15.952 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:01:16 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:16.026 271653 INFO neutron.agent.dhcp.agent [None req-0f51975c-da30-46b9-99ba-d386327a739e - - - - - -] DHCP configuration for ports {'bddc7dbd-b9be-4144-b0b1-e1b03149abb7'} is completed#033[00m Oct 5 06:01:16 localhost openstack_network_exporter[250246]: ERROR 10:01:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:16 localhost openstack_network_exporter[250246]: ERROR 10:01:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:16 localhost openstack_network_exporter[250246]: ERROR 10:01:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:01:16 localhost openstack_network_exporter[250246]: ERROR 10:01:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:01:16 localhost openstack_network_exporter[250246]: Oct 5 06:01:16 localhost openstack_network_exporter[250246]: ERROR 10:01:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:01:16 localhost openstack_network_exporter[250246]: Oct 5 06:01:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB 
avail Oct 5 06:01:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:01:20.400 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:01:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:01:20.400 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:01:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:01:20.401 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:01:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:01:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:01:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:01:21 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:01:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:01:21 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 
f920ab3e-2d9a-4992-9a44-3522bbe6ae2c (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:01:21 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev f920ab3e-2d9a-4992-9a44-3522bbe6ae2c (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:01:21 localhost ceph-mgr[301363]: [progress INFO root] Completed event f920ab3e-2d9a-4992-9a44-3522bbe6ae2c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:01:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:01:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:01:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:21 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:01:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:01:21 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:01:21 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:01:21 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:01:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:01:22.525 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 
'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:01:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:01:22.527 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:01:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:01:25 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:25.872 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:01:25Z, description=, device_id=d1c8d9ba-5ba2-494a-afc4-a645ce122b91, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e8055303-e250-41ac-86e4-c14d0eb98b50, ip_allocation=immediate, mac_address=fa:16:3e:2e:da:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=184, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:01:25Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:01:25 localhost podman[321833]: 2025-10-05 10:01:25.935253852 +0000 UTC m=+0.094568056 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, 
name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:25 localhost podman[321833]: 2025-10-05 10:01:25.972636973 +0000 UTC m=+0.131951127 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:25 localhost podman[321834]: 2025-10-05 10:01:25.988246508 +0000 UTC m=+0.146191324 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:01:25 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:01:25 localhost podman[321834]: 2025-10-05 10:01:25.997212138 +0000 UTC m=+0.155156964 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:01:26 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:01:26 localhost podman[248157]: time="2025-10-05T10:01:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:01:26 localhost podman[248157]: @ - - [05/Oct/2025:10:01:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 06:01:26 localhost podman[248157]: @ - - [05/Oct/2025:10:01:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19318 "" "Go-http-client/1.1" Oct 5 06:01:26 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:01:26 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:01:26 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:01:26 localhost podman[321894]: 2025-10-05 10:01:26.152615167 +0000 UTC m=+0.081868052 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:01:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:26.436 271653 INFO neutron.agent.dhcp.agent [None req-2692d16e-e367-498b-b9c1-63c5f0a6273e - - - - - -] DHCP configuration for ports {'e8055303-e250-41ac-86e4-c14d0eb98b50'} is completed#033[00m Oct 5 06:01:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 
41 GiB / 42 GiB avail Oct 5 06:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:01:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:28 localhost podman[321916]: 2025-10-05 10:01:28.926908662 +0000 UTC m=+0.089680549 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=) Oct 5 06:01:28 localhost podman[321916]: 2025-10-05 10:01:28.97133257 +0000 UTC m=+0.134104507 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 
'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm) Oct 5 06:01:28 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:01:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.876074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489876179, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1962, "num_deletes": 251, "total_data_size": 3458366, "memory_usage": 3500064, "flush_reason": "Manual Compaction"} Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489895732, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2223010, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15107, "largest_seqno": 17064, "table_properties": {"data_size": 2215623, "index_size": 4340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16584, "raw_average_key_size": 21, "raw_value_size": 2200280, "raw_average_value_size": 2795, "num_data_blocks": 188, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658356, "oldest_key_time": 1759658356, "file_creation_time": 1759658489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19758 microseconds, and 6713 cpu microseconds. Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.895845) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2223010 bytes OK Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.895874) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.897721) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.897744) EVENT_LOG_v1 {"time_micros": 1759658489897738, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.897801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3449397, prev total WAL file size 3449397, number of live WAL files 2. Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.899032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. 
'7061786F73003132303439' seq:0, type:0; will stop at (end) Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2170KB)], [21(16MB)] Oct 5 06:01:29 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658489899099, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19183972, "oldest_snapshot_seqno": -1} Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12196 keys, 16719241 bytes, temperature: kUnknown Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658490003982, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16719241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16650971, "index_size": 36681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 329098, "raw_average_key_size": 26, "raw_value_size": 16444488, "raw_average_value_size": 1348, "num_data_blocks": 1375, "num_entries": 12196, "num_filter_entries": 12196, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.004404) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16719241 bytes Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.006035) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 159.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 16.2 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(16.2) write-amplify(7.5) OK, records in: 12728, records dropped: 532 output_compression: NoCompression Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.006068) EVENT_LOG_v1 {"time_micros": 1759658490006054, "job": 10, "event": "compaction_finished", "compaction_time_micros": 105046, "compaction_time_cpu_micros": 48599, "output_level": 6, "num_output_files": 1, "total_output_size": 16719241, "num_input_records": 12728, "num_output_records": 12196, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005471152/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658490006537, "job": 10, "event": "table_file_deletion", "file_number": 23} Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658490008858, "job": 10, "event": "table_file_deletion", "file_number": 21} Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:29.898898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.008954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.008961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.008965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.008968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:30 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:01:30.008971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:01:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:32 localhost 
neutron_sriov_agent[264647]: 2025-10-05 10:01:32.524 2 INFO neutron.agent.securitygroups_rpc [None req-369a5acc-81db-4453-b416-f03a8cf26524 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m Oct 5 06:01:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:01:32.529 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:01:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:01:34 localhost podman[321936]: 2025-10-05 10:01:34.912334151 +0000 UTC m=+0.080696509 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 06:01:34 localhost podman[321936]: 2025-10-05 10:01:34.948371195 +0000 UTC 
m=+0.116733523 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:34 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:01:35 localhost neutron_sriov_agent[264647]: 2025-10-05 10:01:35.256 2 INFO neutron.agent.securitygroups_rpc [None req-20934a8c-c6ee-421b-bc8e-071eddd99f39 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m Oct 5 06:01:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:01:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:01:40 localhost podman[321953]: 2025-10-05 10:01:40.912005559 +0000 UTC m=+0.079174435 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:01:40 localhost podman[321953]: 2025-10-05 10:01:40.923642467 +0000 UTC m=+0.090811373 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3) Oct 5 06:01:40 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:01:40 localhost podman[321954]: 2025-10-05 10:01:40.972703788 +0000 UTC m=+0.137171160 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:01:41 localhost podman[321954]: 2025-10-05 10:01:41.010134611 +0000 UTC m=+0.174602003 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:01:41 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:01:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:41 localhost ovn_controller[157556]: 2025-10-05T10:01:41Z|00040|memory|INFO|peak resident set size grew 52% in last 2275.7 seconds, from 12688 kB to 19340 kB Oct 5 06:01:41 localhost ovn_controller[157556]: 2025-10-05T10:01:41Z|00041|memory|INFO|idl-cells-OVN_Southbound:6944 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:181 lflow-cache-entries-cache-matches:228 lflow-cache-size-KB:701 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:325 ofctrl_installed_flow_usage-KB:238 ofctrl_sb_flow_ref_usage-KB:126 Oct 5 06:01:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:01:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:01:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:01:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:01:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:01:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:01:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:01:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:01:44 localhost systemd[1]: tmp-crun.Wyg3oj.mount: Deactivated successfully. Oct 5 06:01:44 localhost podman[321995]: 2025-10-05 10:01:44.917524118 +0000 UTC m=+0.081478558 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 06:01:44 localhost podman[321995]: 2025-10-05 10:01:44.954388786 +0000 UTC m=+0.118343236 container exec_died 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true) Oct 5 06:01:44 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:01:44 localhost podman[321996]: 2025-10-05 10:01:44.975507293 +0000 UTC m=+0.138358403 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:01:45 localhost podman[321996]: 2025-10-05 10:01:45.071281181 +0000 UTC m=+0.234132281 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:01:45 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:01:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Oct 5 06:01:46 localhost openstack_network_exporter[250246]: ERROR 10:01:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:46 localhost openstack_network_exporter[250246]: ERROR 10:01:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:01:46 localhost openstack_network_exporter[250246]: ERROR 10:01:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:01:46 localhost openstack_network_exporter[250246]: ERROR 10:01:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:01:46 localhost openstack_network_exporter[250246]: Oct 5 06:01:46 localhost openstack_network_exporter[250246]: ERROR 10:01:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:01:46 localhost openstack_network_exporter[250246]: Oct 5 06:01:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Oct 5 06:01:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Oct 5 06:01:50 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:50.772 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, 
binding:vnic_type=normal, created_at=2025-10-05T10:01:50Z, description=, device_id=98de158a-6996-45af-ad6c-8e7f89620384, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7671ebb0-6404-47da-91f4-47cab979b84d, ip_allocation=immediate, mac_address=fa:16:3e:79:83:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=347, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:01:50Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:01:51 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:01:51 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:01:51 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:01:51 localhost podman[322055]: 2025-10-05 10:01:51.012910995 +0000 UTC m=+0.060446713 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:01:51 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:51.302 271653 INFO neutron.agent.dhcp.agent [None req-e618685a-b2a0-4a34-99cc-fb24625c2f58 - - - - - -] DHCP configuration for ports {'7671ebb0-6404-47da-91f4-47cab979b84d'} is completed#033[00m Oct 5 06:01:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s Oct 5 06:01:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 192 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s Oct 5 06:01:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:54.425 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:01:54Z, description=, device_id=9f07f874-3ab3-42d6-af9e-3fd32cc284c9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4b54091e-0615-4394-812a-3c374c16c705, ip_allocation=immediate, mac_address=fa:16:3e:bf:ab:4e, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=357, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:01:54Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:01:54 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 5 addresses Oct 5 06:01:54 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:01:54 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:01:54 localhost podman[322092]: 2025-10-05 10:01:54.637835923 +0000 UTC m=+0.059437526 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 06:01:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:01:54.886 271653 INFO neutron.agent.dhcp.agent [None req-bbb16fc5-d464-4eaf-a0d4-d7a738170f90 - - - - - -] DHCP configuration for ports {'4b54091e-0615-4394-812a-3c374c16c705'} is completed#033[00m Oct 5 06:01:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 192 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s Oct 5 06:01:56 localhost podman[248157]: time="2025-10-05T10:01:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:01:56 localhost podman[248157]: @ - - [05/Oct/2025:10:01:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 06:01:56 localhost podman[248157]: @ - - [05/Oct/2025:10:01:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19310 "" "Go-http-client/1.1" Oct 5 06:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:01:56 localhost podman[322112]: 2025-10-05 10:01:56.913976521 +0000 UTC m=+0.083683448 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:01:56 localhost podman[322112]: 2025-10-05 10:01:56.928317633 +0000 UTC m=+0.098024560 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:01:56 localhost podman[322113]: 2025-10-05 10:01:56.968738269 +0000 UTC m=+0.135575318 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:01:56 localhost podman[322113]: 2025-10-05 10:01:56.977522969 +0000 UTC m=+0.144360078 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:01:56 localhost systemd[1]: 
ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:01:57 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:01:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s Oct 5 06:01:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:01:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s Oct 5 06:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:01:59 localhost podman[322154]: 2025-10-05 10:01:59.91374978 +0000 UTC m=+0.079200416 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container) Oct 5 06:01:59 localhost podman[322154]: 2025-10-05 10:01:59.924813113 +0000 UTC m=+0.090263809 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 06:01:59 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:02:01 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:01.391 2 INFO neutron.agent.securitygroups_rpc [req-73c8884d-fcac-478e-b650-1e996a105094 req-ac216bf8-85a5-4b0a-a857-5ba96f861d9f b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['6bdda02f-808a-473f-b12a-e76a2f226c0b']#033[00m Oct 5 06:02:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s Oct 5 06:02:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:01.794 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:01Z, description=, device_id=9ba53606-4aec-427b-bd79-239261fe902f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=84597dc0-9db3-47cd-a0c1-32fc77cc1d23, ip_allocation=immediate, mac_address=fa:16:3e:c8:17:14, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, 
project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=410, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:02:01Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:02:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:01.882 271653 INFO neutron.agent.linux.ip_lib [None req-ab58d08a-7ef7-446a-89dd-dbad4d52e47c - - - - - -] Device tapabd8ebc9-bd cannot be used as it has no MAC address#033[00m Oct 5 06:02:01 localhost kernel: device tapabd8ebc9-bd entered promiscuous mode Oct 5 06:02:01 localhost NetworkManager[5970]: [1759658521.9069] manager: (tapabd8ebc9-bd): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Oct 5 06:02:01 localhost ovn_controller[157556]: 2025-10-05T10:02:01Z|00042|binding|INFO|Claiming lport abd8ebc9-bda1-4083-a93e-d6e6832b93e2 for this chassis. Oct 5 06:02:01 localhost ovn_controller[157556]: 2025-10-05T10:02:01Z|00043|binding|INFO|abd8ebc9-bda1-4083-a93e-d6e6832b93e2: Claiming unknown Oct 5 06:02:01 localhost systemd-udevd[322198]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:01.919 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-3f4911a6-9be1-4156-824f-838e2bac1e4b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f4911a6-9be1-4156-824f-838e2bac1e4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7117de923d14d3491e796ec245562e0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e574b21f-048d-453d-bca1-af20eedc296c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=abd8ebc9-bda1-4083-a93e-d6e6832b93e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:01.922 163201 INFO neutron.agent.ovn.metadata.agent [-] Port abd8ebc9-bda1-4083-a93e-d6e6832b93e2 in datapath 3f4911a6-9be1-4156-824f-838e2bac1e4b bound to our chassis#033[00m Oct 5 06:02:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:01.925 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Port 00747696-ff38-4b90-82ff-9b7a63d07434 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:02:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:01.925 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f4911a6-9be1-4156-824f-838e2bac1e4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:02:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:01.928 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b6a5eb-615d-49bb-b1d7-a90490478923]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost ovn_controller[157556]: 2025-10-05T10:02:01Z|00044|binding|INFO|Setting lport abd8ebc9-bda1-4083-a93e-d6e6832b93e2 ovn-installed in OVS Oct 5 06:02:01 localhost ovn_controller[157556]: 2025-10-05T10:02:01Z|00045|binding|INFO|Setting lport abd8ebc9-bda1-4083-a93e-d6e6832b93e2 up in Southbound Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost journal[237639]: ethtool ioctl error on tapabd8ebc9-bd: No such device Oct 5 06:02:01 localhost podman[322211]: 2025-10-05 10:02:01.997363806 +0000 UTC m=+0.044097857 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:02:01 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 6 addresses Oct 5 06:02:02 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:02 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:02 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:02.293 271653 INFO neutron.agent.dhcp.agent [None req-4569f3c3-332a-4b64-a089-fb05e78ee685 - - - - - -] DHCP configuration for ports {'84597dc0-9db3-47cd-a0c1-32fc77cc1d23'} is completed#033[00m Oct 5 06:02:02 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:02.357 2 INFO neutron.agent.securitygroups_rpc [req-7f63b2f6-ddd6-49c5-bce7-9008b92d1a67 req-6f282e34-8798-4b91-a623-4aa0b04785f4 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['3629377b-e072-4903-a05a-f6ff16e22cf7']#033[00m Oct 5 06:02:02 localhost podman[322292]: Oct 5 06:02:02 localhost podman[322292]: 2025-10-05 10:02:02.854911107 +0000 UTC m=+0.086832385 container create 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 5 06:02:02 localhost systemd[1]: Started libpod-conmon-52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883.scope. Oct 5 06:02:02 localhost systemd[1]: Started libcrun container. Oct 5 06:02:02 localhost podman[322292]: 2025-10-05 10:02:02.812867117 +0000 UTC m=+0.044788425 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:02:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aca00c32c75cabb9276c0698ac2491b6a89b1d59c1367bf888981e5707f38f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:02 localhost podman[322292]: 2025-10-05 10:02:02.924623552 +0000 UTC m=+0.156544830 container init 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:02:02 localhost podman[322292]: 2025-10-05 10:02:02.932885498 +0000 UTC m=+0.164806776 container start 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:02:02 localhost dnsmasq[322310]: started, version 2.85 cachesize 150 Oct 5 06:02:02 localhost dnsmasq[322310]: DNS service limited to local subnets Oct 5 06:02:02 localhost dnsmasq[322310]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:02:02 localhost dnsmasq[322310]: warning: no upstream servers configured Oct 5 06:02:02 localhost dnsmasq-dhcp[322310]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:02:02 localhost dnsmasq[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/addn_hosts - 0 addresses Oct 5 06:02:02 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/host Oct 5 06:02:02 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/opts Oct 5 06:02:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:03.073 271653 INFO neutron.agent.dhcp.agent [None req-f38b9e17-0b05-45c6-91c4-5402b0c20f0a - - - - - -] DHCP configuration for ports {'d58f6505-9b69-453b-bcc6-99e003b7031d'} is completed#033[00m Oct 5 06:02:03 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:03.363 2 INFO neutron.agent.securitygroups_rpc [req-9dc7d59c-86cf-4175-be38-fa9e3d7cc1af req-b7105b62-6413-452f-a2bf-ca0e52fb3f07 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['8c680d2e-9a99-4414-88da-395392f19bf8']#033[00m Oct 5 06:02:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s Oct 5 06:02:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:04.468 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:04Z, description=, device_id=9ba53606-4aec-427b-bd79-239261fe902f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=41b07752-deb8-4a86-84c6-8bc01773b268, ip_allocation=immediate, mac_address=fa:16:3e:93:6f:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:01:59Z, description=, dns_domain=, id=3f4911a6-9be1-4156-824f-838e2bac1e4b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1362108766-network, port_security_enabled=True, project_id=e7117de923d14d3491e796ec245562e0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14030, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=399, status=ACTIVE, subnets=['a93e5846-a6e1-4c80-b71b-4ef60ec4bb33'], tags=[], tenant_id=e7117de923d14d3491e796ec245562e0, updated_at=2025-10-05T10:02:00Z, vlan_transparent=None, network_id=3f4911a6-9be1-4156-824f-838e2bac1e4b, port_security_enabled=False, project_id=e7117de923d14d3491e796ec245562e0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=435, status=DOWN, tags=[], tenant_id=e7117de923d14d3491e796ec245562e0, updated_at=2025-10-05T10:02:04Z on network 3f4911a6-9be1-4156-824f-838e2bac1e4b#033[00m Oct 5 06:02:04 localhost podman[322327]: 2025-10-05 10:02:04.688653572 +0000 UTC m=+0.059491858 
container kill 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:02:04 localhost dnsmasq[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/addn_hosts - 1 addresses Oct 5 06:02:04 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/host Oct 5 06:02:04 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/opts Oct 5 06:02:04 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:04.911 2 INFO neutron.agent.securitygroups_rpc [req-5b7f7d3b-1c30-4ad1-ad78-2f9c38b249e5 req-40cbc6af-2bd3-4a54-bd00-eed01726fc22 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['5dddcefb-2e07-4a82-bdc7-ae53daf271ad']#033[00m Oct 5 06:02:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:04.950 271653 INFO neutron.agent.dhcp.agent [None req-aff94edc-bce1-4bcc-bc94-93f869f5a566 - - - - - -] DHCP configuration for ports {'41b07752-deb8-4a86-84c6-8bc01773b268'} is completed#033[00m Oct 5 06:02:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s Oct 5 06:02:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. 
Oct 5 06:02:05 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:05.739 2 INFO neutron.agent.securitygroups_rpc [req-9c7c0f23-7c68-40ed-847c-e1348564d729 req-a1847abc-2bf9-4a6d-8989-b4ba28a6ff1e b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['dbfea3ca-3964-451b-9539-59e59bd91033']#033[00m Oct 5 06:02:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:02:05 localhost podman[322350]: 2025-10-05 10:02:05.927743282 +0000 UTC m=+0.089740454 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 5 06:02:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:05.926 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:04Z, description=, device_id=9ba53606-4aec-427b-bd79-239261fe902f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=41b07752-deb8-4a86-84c6-8bc01773b268, ip_allocation=immediate, mac_address=fa:16:3e:93:6f:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:01:59Z, description=, dns_domain=, id=3f4911a6-9be1-4156-824f-838e2bac1e4b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1362108766-network, port_security_enabled=True, project_id=e7117de923d14d3491e796ec245562e0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14030, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=399, status=ACTIVE, subnets=['a93e5846-a6e1-4c80-b71b-4ef60ec4bb33'], tags=[], tenant_id=e7117de923d14d3491e796ec245562e0, updated_at=2025-10-05T10:02:00Z, vlan_transparent=None, network_id=3f4911a6-9be1-4156-824f-838e2bac1e4b, port_security_enabled=False, project_id=e7117de923d14d3491e796ec245562e0, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=435, status=DOWN, tags=[], tenant_id=e7117de923d14d3491e796ec245562e0, updated_at=2025-10-05T10:02:04Z on network 3f4911a6-9be1-4156-824f-838e2bac1e4b#033[00m Oct 5 06:02:05 localhost systemd[1]: tmp-crun.5VN2OA.mount: Deactivated successfully. Oct 5 06:02:05 localhost podman[322350]: 2025-10-05 10:02:05.936246924 +0000 UTC m=+0.098244096 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:05 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:02:06 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:06.086 2 INFO neutron.agent.securitygroups_rpc [req-b4f4f790-735f-4fea-adc4-2e1e90b281a8 req-b22baf81-c307-4826-b189-de8fd7b88120 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['dbfea3ca-3964-451b-9539-59e59bd91033']#033[00m Oct 5 06:02:06 localhost dnsmasq[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/addn_hosts - 1 addresses Oct 5 06:02:06 localhost podman[322385]: 2025-10-05 10:02:06.147461327 +0000 UTC m=+0.060961187 container kill 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:06 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/host Oct 5 06:02:06 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/opts Oct 5 06:02:06 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:06.404 271653 INFO neutron.agent.dhcp.agent [None req-60713b74-849e-468d-87f2-00dd83a42578 - - - - - -] DHCP configuration for 
ports {'41b07752-deb8-4a86-84c6-8bc01773b268'} is completed#033[00m Oct 5 06:02:06 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:06.535 2 INFO neutron.agent.securitygroups_rpc [req-9f948287-844a-4842-89bf-12bb1b82e63f req-3bd4bd2e-9087-420d-929b-49ab12f9cd68 b349345ade4d4e109c01d40faf4d8eb9 050458dc944c4d96a370486dea13087e - - default default] Security group rule updated ['dbfea3ca-3964-451b-9539-59e59bd91033']#033[00m Oct 5 06:02:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s Oct 5 06:02:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 296 KiB/s rd, 2.1 MiB/s wr, 56 op/s Oct 5 06:02:10 localhost nova_compute[297130]: 2025-10-05 10:02:10.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:10 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:10.409 2 INFO neutron.agent.securitygroups_rpc [None req-9cd60413-135f-47c4-ab43-0a478fa866e3 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']#033[00m Oct 5 06:02:11 localhost nova_compute[297130]: 2025-10-05 10:02:11.294 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:11 localhost nova_compute[297130]: 2025-10-05 10:02:11.295 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:02:11 localhost nova_compute[297130]: 2025-10-05 10:02:11.295 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:02:11 localhost nova_compute[297130]: 2025-10-05 10:02:11.323 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:02:11 localhost nova_compute[297130]: 2025-10-05 10:02:11.323 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:02:11 Oct 5 06:02:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:02:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:02:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['images', 'vms', 'backups', 'manila_data', 'volumes', 'manila_metadata', '.mgr'] Oct 5 06:02:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:02:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 296 KiB/s rd, 2.1 MiB/s 
wr, 56 op/s Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006559399282291085 of space, bias 1.0, pg target 1.311879856458217 quantized to 32 (current 32) Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 
06:02:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Oct 5 06:02:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:02:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:02:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:02:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:02:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:02:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:02:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 06:02:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:02:11 localhost systemd[1]: tmp-crun.YZmz9x.mount: Deactivated successfully. Oct 5 06:02:11 localhost podman[322406]: 2025-10-05 10:02:11.918699811 +0000 UTC m=+0.083524624 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 06:02:11 localhost podman[322406]: 2025-10-05 10:02:11.929921777 +0000 UTC m=+0.094746630 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:02:11 localhost systemd[1]: 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:02:12 localhost podman[322407]: 2025-10-05 10:02:12.017920754 +0000 UTC m=+0.179175849 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:02:12 localhost podman[322407]: 2025-10-05 10:02:12.055288265 +0000 UTC m=+0.216543430 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:02:12 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:02:12 localhost nova_compute[297130]: 2025-10-05 10:02:12.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:12 localhost nova_compute[297130]: 2025-10-05 10:02:12.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:12 localhost nova_compute[297130]: 2025-10-05 10:02:12.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:12 localhost nova_compute[297130]: 2025-10-05 10:02:12.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:02:12 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 5 addresses Oct 5 06:02:12 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:12 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:12 localhost podman[322463]: 2025-10-05 10:02:12.663913661 +0000 UTC m=+0.060042251 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:02:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:12 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/93981544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:13 localhost nova_compute[297130]: 2025-10-05 10:02:13.301 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:13 localhost nova_compute[297130]: 2025-10-05 10:02:13.301 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:13 localhost nova_compute[297130]: 2025-10-05 10:02:13.302 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:02:13 localhost nova_compute[297130]: 2025-10-05 10:02:13.326 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:02:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 225 MiB data, 867 
MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s Oct 5 06:02:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:14 localhost nova_compute[297130]: 2025-10-05 10:02:14.297 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e90 e90: 6 total, 6 up, 6 in Oct 5 06:02:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:14.494 2 INFO neutron.agent.securitygroups_rpc [None req-0ae8405b-5f6b-48b7-adaa-b729d895987d b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']#033[00m Oct 5 06:02:15 localhost nova_compute[297130]: 2025-10-05 10:02:15.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:15 localhost nova_compute[297130]: 2025-10-05 10:02:15.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:02:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 225 MiB data, 867 MiB used, 41 GiB / 42 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 81 op/s Oct 5 06:02:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 06:02:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:02:15 localhost podman[322484]: 2025-10-05 10:02:15.896530804 +0000 UTC m=+0.060624287 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:15 localhost podman[322484]: 2025-10-05 10:02:15.908149702 +0000 UTC m=+0.072243195 container 
exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:15 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:02:15 localhost podman[322485]: 2025-10-05 10:02:15.971953686 +0000 UTC m=+0.132405550 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:02:16 localhost podman[322485]: 2025-10-05 10:02:16.039364178 +0000 UTC m=+0.199816022 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:02:16 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.301 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.301 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.302 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.302 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain 
(node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.302 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e91 e91: 6 total, 6 up, 6 in Oct 5 06:02:16 localhost openstack_network_exporter[250246]: ERROR 10:02:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:02:16 localhost openstack_network_exporter[250246]: ERROR 10:02:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:02:16 localhost openstack_network_exporter[250246]: ERROR 10:02:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:02:16 localhost openstack_network_exporter[250246]: ERROR 10:02:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:02:16 localhost openstack_network_exporter[250246]: Oct 5 06:02:16 localhost openstack_network_exporter[250246]: ERROR 10:02:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:02:16 localhost openstack_network_exporter[250246]: Oct 5 06:02:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:16 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1987627801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.769 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.977 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.979 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11872MB free_disk=41.700897216796875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": 
"1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.979 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:16 localhost nova_compute[297130]: 2025-10-05 10:02:16.980 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.046 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total 
allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.048 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:02:17 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:17.430 271653 INFO neutron.agent.linux.ip_lib [None req-6708b6d4-c5a8-4aaa-b9e2-bb529abd2ac6 - - - - - -] Device tapba67cae5-6a cannot be used as it has no MAC address#033[00m Oct 5 06:02:17 localhost kernel: device tapba67cae5-6a entered promiscuous mode Oct 5 06:02:17 localhost NetworkManager[5970]: [1759658537.4608] manager: (tapba67cae5-6a): new Generic device (/org/freedesktop/NetworkManager/Devices/16) Oct 5 06:02:17 localhost ovn_controller[157556]: 2025-10-05T10:02:17Z|00046|binding|INFO|Claiming lport ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 for this chassis. Oct 5 06:02:17 localhost ovn_controller[157556]: 2025-10-05T10:02:17Z|00047|binding|INFO|ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2: Claiming unknown Oct 5 06:02:17 localhost systemd-udevd[322562]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:17.477 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-8b0fb53c-a380-4532-8d67-7340b4a78d0a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b0fb53c-a380-4532-8d67-7340b4a78d0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88e675d94b7c464fab0695a788e43e9b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c80bcdd5-77a2-46a7-af24-2da29c0bd139, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.478 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:02:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:17.481 163201 INFO neutron.agent.ovn.metadata.agent [-] Port ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 in datapath 8b0fb53c-a380-4532-8d67-7340b4a78d0a 
bound to our chassis#033[00m Oct 5 06:02:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:17.483 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8b0fb53c-a380-4532-8d67-7340b4a78d0a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:02:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:17.484 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0613d6-043c-46ae-8940-2983f6dd1040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost ovn_controller[157556]: 2025-10-05T10:02:17Z|00048|binding|INFO|Setting lport ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 ovn-installed in OVS Oct 5 06:02:17 localhost ovn_controller[157556]: 2025-10-05T10:02:17Z|00049|binding|INFO|Setting lport ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 up in Southbound Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost journal[237639]: ethtool ioctl error on tapba67cae5-6a: No such device Oct 5 06:02:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 145 MiB data, 733 MiB used, 41 GiB / 42 GiB avail; 115 KiB/s rd, 53 KiB/s wr, 110 op/s Oct 5 06:02:17 localhost 
nova_compute[297130]: 2025-10-05 10:02:17.879 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.880 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.900 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.925 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing trait associations 
for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:02:17 localhost nova_compute[297130]: 2025-10-05 10:02:17.958 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:18 localhost podman[322653]: Oct 5 06:02:18 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/452474209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:18 localhost podman[322653]: 2025-10-05 10:02:18.38255498 +0000 UTC m=+0.089494298 container create 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:18 localhost nova_compute[297130]: 2025-10-05 10:02:18.408 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:18 localhost nova_compute[297130]: 2025-10-05 10:02:18.415 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:02:18 localhost systemd[1]: Started libpod-conmon-382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b.scope. 
Oct 5 06:02:18 localhost nova_compute[297130]: 2025-10-05 10:02:18.430 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:02:18 localhost systemd[1]: Started libcrun container. Oct 5 06:02:18 localhost podman[322653]: 2025-10-05 10:02:18.338508905 +0000 UTC m=+0.045448263 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:02:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf68057b3224cd0d95b97370dcfd6c9c177b45856feb5f16aef08acd4cc55fc6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:18 localhost podman[322653]: 2025-10-05 10:02:18.446990351 +0000 UTC m=+0.153929669 container init 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:02:18 localhost podman[322653]: 2025-10-05 10:02:18.455669248 +0000 UTC m=+0.162608536 container start 
382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:18 localhost dnsmasq[322674]: started, version 2.85 cachesize 150 Oct 5 06:02:18 localhost dnsmasq[322674]: DNS service limited to local subnets Oct 5 06:02:18 localhost dnsmasq[322674]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:02:18 localhost dnsmasq[322674]: warning: no upstream servers configured Oct 5 06:02:18 localhost dnsmasq-dhcp[322674]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:02:18 localhost dnsmasq[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/addn_hosts - 0 addresses Oct 5 06:02:18 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/host Oct 5 06:02:18 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/opts Oct 5 06:02:18 localhost nova_compute[297130]: 2025-10-05 10:02:18.500 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:02:18 localhost nova_compute[297130]: 2025-10-05 10:02:18.501 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:18 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:18.634 271653 INFO neutron.agent.dhcp.agent [None req-7f83ec75-4371-49b9-a76e-2f8a77d36b28 - - - - - -] DHCP configuration for ports {'5a563f2e-fead-4b02-93b6-f11a2dcfb692'} is completed#033[00m Oct 5 06:02:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.065 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.065 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.080 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Oct 5 06:02:19 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:19.099 2 INFO neutron.agent.securitygroups_rpc [None req-752332cd-f9cf-4a7f-a2cb-f938e53511d4 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m Oct 5 06:02:19 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:19.102 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:18Z, description=, device_id=7030e095-2ccf-40ab-a06b-e80712006a75, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a78500a6-43c3-4526-84b0-948c29a8cfbf, ip_allocation=immediate, mac_address=fa:16:3e:26:be:53, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], 
standard_attr_id=529, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:02:18Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.186 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.187 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.192 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.194 2 INFO nova.compute.claims [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Claim successful on node np0005471152.localdomain#033[00m Oct 5 06:02:19 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 6 addresses Oct 5 06:02:19 localhost podman[322692]: 2025-10-05 10:02:19.327997953 +0000 UTC m=+0.059614391 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:02:19 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:19 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.341 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 145 MiB 
data, 733 MiB used, 41 GiB / 42 GiB avail; 115 KiB/s rd, 53 KiB/s wr, 110 op/s Oct 5 06:02:19 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:19.589 271653 INFO neutron.agent.dhcp.agent [None req-dfc6902b-0b2b-493e-9c20-40519ecbbfb8 - - - - - -] DHCP configuration for ports {'a78500a6-43c3-4526-84b0-948c29a8cfbf'} is completed#033[00m Oct 5 06:02:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3726704192' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.784 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.791 2 DEBUG nova.compute.provider_tree [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.814 2 DEBUG nova.scheduler.client.report [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 
15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.845 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.847 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.906 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Allocating IP information in the background. 
_allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.907 2 DEBUG nova.network.neutron [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.922 2 INFO nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Oct 5 06:02:19 localhost nova_compute[297130]: 2025-10-05 10:02:19.942 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Oct 5 06:02:20 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:20.022 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:19Z, description=, device_id=6fe59186-19e2-43ea-bad6-a5233a034471, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=43f4fcbb-be0f-4c36-a9ed-c4d3b8b866d5, ip_allocation=immediate, mac_address=fa:16:3e:75:1c:dc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=536, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:02:19Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.043 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb 
- - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.046 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.047 2 INFO nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Creating image(s)#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.084 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.133 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.173 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk does not exist 
__init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.185 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "b315ad9cd7995c7800ecf94222a7c08b7e34bf34" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:20 localhost nova_compute[297130]: 2025-10-05 10:02:20.186 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b315ad9cd7995c7800ecf94222a7c08b7e34bf34" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:20 localhost podman[322806]: 2025-10-05 10:02:20.27100159 +0000 UTC m=+0.060877956 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:02:20 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 7 addresses Oct 5 06:02:20 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:20 localhost dnsmasq-dhcp[271991]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:20.400 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:20.401 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:20.402 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:20 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:20.547 271653 INFO neutron.agent.dhcp.agent [None req-76e23f76-33e0-4e7d-9473-11f21b0965de - - - - - -] DHCP configuration for ports {'43f4fcbb-be0f-4c36-a9ed-c4d3b8b866d5'} is completed#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.102 2 DEBUG nova.virt.libvirt.imagebackend [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Image locations are: [{'url': 'rbd://659062ac-50b4-5607-b699-3105da7f55ee/images/6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://659062ac-50b4-5607-b699-3105da7f55ee/images/6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Oct 5 06:02:21 localhost neutron_sriov_agent[264647]: 
2025-10-05 10:02:21.241 2 INFO neutron.agent.securitygroups_rpc [None req-ca79af68-b363-4752-a92a-9b5fd7e9378c 2c39388980e04b87a9a048001f9e1b0b ca79c6dd41f44883b5382141d131a288 - - default default] Security group member updated ['c0bd513c-388e-4362-8f22-2404d7744c8b']#033[00m Oct 5 06:02:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e92 e92: 6 total, 6 up, 6 in Oct 5 06:02:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 145 MiB data, 733 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 27 KiB/s wr, 105 op/s Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.630 2 WARNING oslo_policy.policy [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.631 2 WARNING oslo_policy.policy [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. 
You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.633 2 DEBUG nova.policy [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Oct 5 06:02:21 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:21.676 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:21Z, description=, device_id=6fe59186-19e2-43ea-bad6-a5233a034471, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=658b1d81-e7e9-4a1b-b15d-f3ba865fd4dd, ip_allocation=immediate, mac_address=fa:16:3e:78:5c:99, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:02:15Z, description=, dns_domain=, id=8b0fb53c-a380-4532-8d67-7340b4a78d0a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-1621400584-network, 
port_security_enabled=True, project_id=88e675d94b7c464fab0695a788e43e9b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56428, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=516, status=ACTIVE, subnets=['5f099efd-ba1e-48e8-a7f7-1cf1d2d75dae'], tags=[], tenant_id=88e675d94b7c464fab0695a788e43e9b, updated_at=2025-10-05T10:02:16Z, vlan_transparent=None, network_id=8b0fb53c-a380-4532-8d67-7340b4a78d0a, port_security_enabled=False, project_id=88e675d94b7c464fab0695a788e43e9b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=538, status=DOWN, tags=[], tenant_id=88e675d94b7c464fab0695a788e43e9b, updated_at=2025-10-05T10:02:21Z on network 8b0fb53c-a380-4532-8d67-7340b4a78d0a#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.884 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:21 localhost podman[322879]: 2025-10-05 10:02:21.931223453 +0000 UTC m=+0.060698491 container kill 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:02:21 localhost dnsmasq[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/addn_hosts - 1 addresses Oct 5 06:02:21 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/host Oct 5 06:02:21 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/opts Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.952 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.954 2 DEBUG nova.virt.images [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] 6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.955 2 DEBUG nova.privsep.utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Oct 5 06:02:21 localhost nova_compute[297130]: 2025-10-05 10:02:21.956 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 
1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.part /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:22 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:22.098 271653 INFO neutron.agent.dhcp.agent [None req-8852bfa4-60be-4b83-a7d2-1bb13ab6d0fe - - - - - -] DHCP configuration for ports {'658b1d81-e7e9-4a1b-b15d-f3ba865fd4dd'} is completed#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.121 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.part /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.converted" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.126 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.193 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] 
CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34.converted --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.195 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b315ad9cd7995c7800ecf94222a7c08b7e34bf34" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 2.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.229 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.233 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34 b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:02:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:02:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:02:22 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:02:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:02:22 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 2a546b5c-20ce-4302-a0a2-2ce7b3b4f704 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:02:22 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 2a546b5c-20ce-4302-a0a2-2ce7b3b4f704 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:02:22 localhost ceph-mgr[301363]: [progress INFO root] Completed event 2a546b5c-20ce-4302-a0a2-2ce7b3b4f704 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:02:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:02:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.803 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34 b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:22 localhost nova_compute[297130]: 2025-10-05 10:02:22.894 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] resizing rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Oct 5 06:02:23 localhost nova_compute[297130]: 2025-10-05 10:02:23.052 2 DEBUG nova.objects.instance [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lazy-loading 'migration_context' on Instance uuid b1dce7a2-b06b-4cdb-b072-ccd123742ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:02:23 localhost nova_compute[297130]: 2025-10-05 10:02:23.070 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 5 06:02:23 localhost nova_compute[297130]: 2025-10-05 10:02:23.071 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Ensure instance console log exists: /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 5 06:02:23 localhost nova_compute[297130]: 2025-10-05 10:02:23.071 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - 
default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:23 localhost nova_compute[297130]: 2025-10-05 10:02:23.072 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:23 localhost nova_compute[297130]: 2025-10-05 10:02:23.072 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:23 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:23.306 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:23 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:23.309 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:02:23 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:02:23 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:02:23 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:23.518 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:21Z, description=, device_id=6fe59186-19e2-43ea-bad6-a5233a034471, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=658b1d81-e7e9-4a1b-b15d-f3ba865fd4dd, ip_allocation=immediate, mac_address=fa:16:3e:78:5c:99, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:02:15Z, description=, dns_domain=, id=8b0fb53c-a380-4532-8d67-7340b4a78d0a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-1621400584-network, port_security_enabled=True, project_id=88e675d94b7c464fab0695a788e43e9b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56428, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=516, status=ACTIVE, subnets=['5f099efd-ba1e-48e8-a7f7-1cf1d2d75dae'], tags=[], tenant_id=88e675d94b7c464fab0695a788e43e9b, updated_at=2025-10-05T10:02:16Z, vlan_transparent=None, network_id=8b0fb53c-a380-4532-8d67-7340b4a78d0a, port_security_enabled=False, project_id=88e675d94b7c464fab0695a788e43e9b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=538, 
status=DOWN, tags=[], tenant_id=88e675d94b7c464fab0695a788e43e9b, updated_at=2025-10-05T10:02:21Z on network 8b0fb53c-a380-4532-8d67-7340b4a78d0a#033[00m Oct 5 06:02:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 177 active+clean; 156 MiB data, 728 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 218 KiB/s wr, 116 op/s Oct 5 06:02:23 localhost dnsmasq[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/addn_hosts - 1 addresses Oct 5 06:02:23 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/host Oct 5 06:02:23 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/opts Oct 5 06:02:23 localhost podman[323088]: 2025-10-05 10:02:23.748693342 +0000 UTC m=+0.060637148 container kill 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:02:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:24 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:24.130 271653 INFO neutron.agent.dhcp.agent [None req-87b33fe9-7dd7-46be-9d27-a22ccaddafdf - - - - - -] DHCP configuration for ports {'658b1d81-e7e9-4a1b-b15d-f3ba865fd4dd'} is completed#033[00m Oct 5 06:02:24 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:24.311 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.388 2 DEBUG nova.network.neutron [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Successfully updated port: 1374da87-a9a5-4840-80a7-197494b76131 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.405 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.406 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquired lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.406 2 DEBUG nova.network.neutron [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.523 2 DEBUG nova.compute.manager 
[req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-changed-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.524 2 DEBUG nova.compute.manager [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Refreshing instance network info cache due to event network-changed-1374da87-a9a5-4840-80a7-197494b76131. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.524 2 DEBUG oslo_concurrency.lockutils [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:24 localhost nova_compute[297130]: 2025-10-05 10:02:24.565 2 DEBUG nova.network.neutron [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.085 2 DEBUG nova.network.neutron [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating instance_info_cache with network_info: [{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.109 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Releasing lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.110 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Instance network_info: |[{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.110 2 DEBUG oslo_concurrency.lockutils [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquired lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 
06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.111 2 DEBUG nova.network.neutron [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Refreshing network info cache for port 1374da87-a9a5-4840-80a7-197494b76131 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.116 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Start _get_guest_xml network_info=[{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 
'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-05T10:00:39Z,direct_url=,disk_format='qcow2',id=6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b36437b65444bcdac75beef77b6981e',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-05T10:00:40Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.123 2 WARNING nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.131 2 DEBUG nova.virt.libvirt.host [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Searching host: 'np0005471152.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.131 2 DEBUG nova.virt.libvirt.host [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.134 2 DEBUG nova.virt.libvirt.host [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Searching host: 'np0005471152.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.134 2 DEBUG nova.virt.libvirt.host [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.135 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.135 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-05T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='97ddc44b-feec-4b28-874c-024e6ebcea56',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-05T10:00:39Z,direct_url=,disk_format='qcow2',id=6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b36437b65444bcdac75beef77b6981e',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-05T10:00:40Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.136 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.136 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.137 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.137 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.137 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.138 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 5 06:02:25 
localhost nova_compute[297130]: 2025-10-05 10:02:25.138 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.138 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.139 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.139 2 DEBUG nova.virt.hardware [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.144 2 DEBUG nova.privsep.utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.145 2 DEBUG oslo_concurrency.processutils [None 
req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:25 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 6 addresses Oct 5 06:02:25 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:25 localhost podman[323144]: 2025-10-05 10:02:25.467553947 +0000 UTC m=+0.095918623 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:02:25 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 156 MiB data, 728 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 192 KiB/s wr, 102 op/s Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.672 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:25 localhost 
nova_compute[297130]: 2025-10-05 10:02:25.709 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:25 localhost nova_compute[297130]: 2025-10-05 10:02:25.714 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:26 localhost podman[248157]: time="2025-10-05T10:02:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:02:26 localhost podman[248157]: @ - - [05/Oct/2025:10:02:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149963 "" "Go-http-client/1.1" Oct 5 06:02:26 localhost podman[248157]: @ - - [05/Oct/2025:10:02:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20266 "" "Go-http-client/1.1" Oct 5 06:02:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:02:26 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1261151470' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.160 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.162 2 DEBUG nova.virt.libvirt.vif [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2001023684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2001023684',id=7,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b069d6351214d1baf4ff391a6512beb',ramdisk_id='',reservation_id='r-k8v41bv0',resources=None,root_device_name='/
dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1030348059',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-05T10:02:19Z,user_data=None,user_id='b56f1071781246a68c1693519a9cd054',uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 5 06:02:26 
localhost nova_compute[297130]: 2025-10-05 10:02:26.162 2 DEBUG nova.network.os_vif_util [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Converting VIF {"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.164 2 DEBUG nova.network.os_vif_util [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.167 2 DEBUG nova.objects.instance [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lazy-loading 'pci_devices' on Instance uuid b1dce7a2-b06b-4cdb-b072-ccd123742ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.191 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] End _get_guest_xml xml=
Oct 5 06:02:26 localhost nova_compute[297130]: [guest XML elided: markup was stripped in capture; surviving element values, in order: uuid b1dce7a2-b06b-4cdb-b072-ccd123742ded, name instance-00000007, memory 131072, vcpus 1, nova metadata (display name tempest-LiveAutoBlockMigrationV225Test-server-2001023684, creation time 2025-10-05 10:02:25, flavor: 128 MB memory / 1 vCPU / 0 ephemeral / 0 swap / 1 GB root disk, owner user tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member, project tempest-LiveAutoBlockMigrationV225Test-1030348059), sysinfo (RDO, OpenStack Compute, 27.5.2-0.20250829104910.6f8decf.el9, serial/uuid b1dce7a2-b06b-4cdb-b072-ccd123742ded, family Virtual Machine), os type hvm, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.193 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Preparing to wait for external event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.193 2 DEBUG oslo_concurrency.lockutils [None
req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.194 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.194 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.195 2 DEBUG nova.virt.libvirt.vif [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2001023684',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2001023684',id=7,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b069d6351214d1baf4ff391a6512beb',ramdisk_id='',reservation_id='r-k8v41bv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1030348059',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-05T10:02:19Z,user_data=None,user_id='b56f1071781246a68c1693519a9cd054',uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,
vm_state='building') vif={"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.196 2 DEBUG nova.network.os_vif_util [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Converting VIF {"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": 
{"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.197 2 DEBUG nova.network.os_vif_util [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.198 2 DEBUG os_vif [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.250 2 DEBUG ovsdbapp.backend.ovs_idl [None 
req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.251 2 DEBUG ovsdbapp.backend.ovs_idl [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.251 2 DEBUG ovsdbapp.backend.ovs_idl [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.289 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.290 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.291 2 INFO oslo.privsep.daemon [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 
b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp7jbhbq09/privsep.sock']#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.306 2 DEBUG nova.network.neutron [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updated VIF entry in instance network info cache for port 1374da87-a9a5-4840-80a7-197494b76131. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.307 2 DEBUG nova.network.neutron [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating instance_info_cache with network_info: [{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.336 2 DEBUG oslo_concurrency.lockutils [req-56cec590-8a4f-4b2d-990d-b22723ae590a req-395463a3-8f28-41b1-b7a2-69a176958dca 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Releasing lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:26 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:02:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.971 2 INFO oslo.privsep.daemon [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Spawned new privsep daemon via rootwrap#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.855 1004 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.860 1004 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 06:02:26 localhost 
nova_compute[297130]: 2025-10-05 10:02:26.864 1004 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Oct 5 06:02:26 localhost nova_compute[297130]: 2025-10-05 10:02:26.864 1004 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1004#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.257 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1374da87-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1374da87-a9, col_values=(('external_ids', {'iface-id': '1374da87-a9a5-4840-80a7-197494b76131', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:06:97', 'vm-uuid': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.302 2 INFO os_vif [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9')#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.355 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.356 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.356 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] No VIF found with MAC fa:16:3e:4b:06:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.358 2 INFO nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Using config drive#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.399 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.526 2 INFO nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Creating config drive at /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.config#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.532 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 
1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoazoye03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 192 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 40 op/s Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.660 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoazoye03" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.703 2 DEBUG nova.storage.rbd_utils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] rbd image b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.709 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.config 
b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:02:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:02:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.925 2 DEBUG oslo_concurrency.processutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.config b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:27 localhost nova_compute[297130]: 2025-10-05 10:02:27.927 2 INFO nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Deleting local config drive /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.config because it was imported into RBD.#033[00m Oct 5 06:02:27 localhost podman[323275]: 2025-10-05 10:02:27.945038718 +0000 UTC m=+0.102024369 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:02:27 localhost systemd[1]: Starting libvirt secret daemon... Oct 5 06:02:27 localhost podman[323275]: 2025-10-05 10:02:27.955214787 +0000 UTC m=+0.112200478 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:02:27 localhost podman[323274]: 2025-10-05 10:02:27.906868956 +0000 UTC m=+0.071708002 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Oct 5 06:02:27 localhost systemd[1]: Started libvirt secret daemon. 
Oct 5 06:02:27 localhost podman[323274]: 2025-10-05 10:02:27.98681714 +0000 UTC m=+0.151656216 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Oct 5 06:02:27 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:02:28 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:02:28 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Oct 5 06:02:28 localhost kernel: device tap1374da87-a9 entered promiscuous mode Oct 5 06:02:28 localhost NetworkManager[5970]: [1759658548.0427] manager: (tap1374da87-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/17) Oct 5 06:02:28 localhost systemd-udevd[323346]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00050|binding|INFO|Claiming lport 1374da87-a9a5-4840-80a7-197494b76131 for this chassis. Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00051|binding|INFO|1374da87-a9a5-4840-80a7-197494b76131: Claiming fa:16:3e:4b:06:97 10.100.0.12 Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00052|binding|INFO|Claiming lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee for this chassis. 
Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00053|binding|INFO|3fa04c44-9142-4d6c-991f-aca11ea8e8ee: Claiming fa:16:3e:ce:90:0e 19.80.0.175 Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.065 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:90:0e 19.80.0.175'], port_security=['fa:16:3e:ce:90:0e 19.80.0.175'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['1374da87-a9a5-4840-80a7-197494b76131'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-973969040', 'neutron:cidrs': '19.80.0.175/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-973969040', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c80697f7-3043-40b9-ba7e-9e4d45b917f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3fa04c44-9142-4d6c-991f-aca11ea8e8ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:28 localhost NetworkManager[5970]: [1759658548.0662] device (tap1374da87-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 06:02:28 localhost NetworkManager[5970]: [1759658548.0669] device (tap1374da87-a9): state change: unavailable -> 
disconnected (reason 'none', sys-iface-state: 'external') Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.068 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:06:97 10.100.0.12'], port_security=['fa:16:3e:4b:06:97 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-738433439', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9493e121-6caf-4009-9106-31c87685c480', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-738433439', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0269f0ba-15e7-46b3-9fe6-9a4bc91e9d33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1374da87-a9a5-4840-80a7-197494b76131) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.069 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 3fa04c44-9142-4d6c-991f-aca11ea8e8ee in datapath 3b6dd988-c148-4dbf-ae5b-dba073193ccc bound to our chassis#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.073 163201 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for 
network 3b6dd988-c148-4dbf-ae5b-dba073193ccc#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00054|binding|INFO|Setting lport 1374da87-a9a5-4840-80a7-197494b76131 ovn-installed in OVS Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00055|binding|INFO|Setting lport 1374da87-a9a5-4840-80a7-197494b76131 up in Southbound Oct 5 06:02:28 localhost ovn_controller[157556]: 2025-10-05T10:02:28Z|00056|binding|INFO|Setting lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee up in Southbound Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:28 localhost systemd-machined[206743]: New machine qemu-1-instance-00000007. Oct 5 06:02:28 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000007. 
Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.531 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[7d976887-4964-4eae-aa45-58c2ac212996]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.532 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b6dd988-c1 in ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.534 271895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b6dd988-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.534 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a12d2e-29a7-4367-9e39-f77e5e4656ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.535 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[688338f1-1fa9-4385-a3ad-baca06049e74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.555 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[e840c343-2d1d-4cf3-839b-0a55aa5a126a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:28.577 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[730d6c1b-5164-416f-8027-47770902e302]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:28 localhost ovn_metadata_agent[163196]: 2025-10-05 
10:02:28.580 163201 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpsuc142xp/privsep.sock']#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.658 2 DEBUG nova.compute.manager [req-08a4a44e-960d-4421-94d0-bc7c45d8bc3f req-104c8e65-48f1-46ff-bff5-4784c2b84f53 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.659 2 DEBUG oslo_concurrency.lockutils [req-08a4a44e-960d-4421-94d0-bc7c45d8bc3f req-104c8e65-48f1-46ff-bff5-4784c2b84f53 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.659 2 DEBUG oslo_concurrency.lockutils [req-08a4a44e-960d-4421-94d0-bc7c45d8bc3f req-104c8e65-48f1-46ff-bff5-4784c2b84f53 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.659 2 DEBUG oslo_concurrency.lockutils [req-08a4a44e-960d-4421-94d0-bc7c45d8bc3f 
req-104c8e65-48f1-46ff-bff5-4784c2b84f53 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.660 2 DEBUG nova.compute.manager [req-08a4a44e-960d-4421-94d0-bc7c45d8bc3f req-104c8e65-48f1-46ff-bff5-4784c2b84f53 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Processing event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.746 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.747 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.747 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Started (Lifecycle Event)#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.750 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 
1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.753 2 INFO nova.virt.libvirt.driver [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Instance spawned successfully.#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.753 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.764 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.781 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.784 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] 
[instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.784 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.784 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.784 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.785 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 5 06:02:28 localhost 
nova_compute[297130]: 2025-10-05 10:02:28.785 2 DEBUG nova.virt.libvirt.driver [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.817 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.817 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.817 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Paused (Lifecycle Event)#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.841 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.844 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.844 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Resumed 
(Lifecycle Event)#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.864 2 INFO nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Took 8.82 seconds to spawn the instance on the hypervisor.#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.864 2 DEBUG nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.865 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.870 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.906 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Oct 5 06:02:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.941 2 INFO nova.compute.manager [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Took 9.80 seconds to build instance.#033[00m Oct 5 06:02:28 localhost nova_compute[297130]: 2025-10-05 10:02:28.971 2 DEBUG oslo_concurrency.lockutils [None req-66b0bef0-f9e3-44eb-b31d-c3904c18d4a1 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 9.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.222 163201 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.224 163201 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsuc142xp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.139 323411 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.144 323411 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.148 323411 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Oct 5 06:02:29 localhost 
ovn_metadata_agent[163196]: 2025-10-05 10:02:29.148 323411 INFO oslo.privsep.daemon [-] privsep daemon running as pid 323411#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.228 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[f6549d77-0d38-4664-a962-6492f2c6c3d6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 192 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 40 op/s Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.714 323411 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.714 323411 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:29 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:29.714 323411 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:29 localhost systemd[1]: tmp-crun.ZaO5yS.mount: Deactivated successfully. 
Oct 5 06:02:29 localhost dnsmasq[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/addn_hosts - 0 addresses Oct 5 06:02:29 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/host Oct 5 06:02:29 localhost dnsmasq-dhcp[322674]: read /var/lib/neutron/dhcp/8b0fb53c-a380-4532-8d67-7340b4a78d0a/opts Oct 5 06:02:29 localhost podman[323433]: 2025-10-05 10:02:29.899617797 +0000 UTC m=+0.108723613 container kill 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.291 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[b82f5ae3-cad9-49c3-b326-9a7a20d4a2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost NetworkManager[5970]: [1759658550.3278] manager: (tap3b6dd988-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/18) Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.327 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3fb4b6-fb91-4a63-9f46-b52efbf180ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost kernel: device tapba67cae5-6a left promiscuous mode Oct 5 06:02:30 localhost 
ovn_controller[157556]: 2025-10-05T10:02:30Z|00057|binding|INFO|Releasing lport ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 from this chassis (sb_readonly=0) Oct 5 06:02:30 localhost ovn_controller[157556]: 2025-10-05T10:02:30Z|00058|binding|INFO|Setting lport ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 down in Southbound Oct 5 06:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.358 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-8b0fb53c-a380-4532-8d67-7340b4a78d0a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b0fb53c-a380-4532-8d67-7340b4a78d0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88e675d94b7c464fab0695a788e43e9b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c80bcdd5-77a2-46a7-af24-2da29c0bd139, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 
10:02:30.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.377 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[4f42972f-b601-4fa3-9598-054137782f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.382 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5069b6-6b0b-45bc-b8da-9ebbda4c7d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3b6dd988-c1: link becomes ready Oct 5 06:02:30 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3b6dd988-c0: link becomes ready Oct 5 06:02:30 localhost NetworkManager[5970]: [1759658550.4142] device (tap3b6dd988-c0): carrier: link connected Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.418 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[77f3b2b7-481d-4fc8-b3ff-60847728f0d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.449 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[bebef40a-9234-4d5e-8530-d5665e73fd5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b6dd988-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], 
['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:22:5c:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], 
['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1202648, 'reachable_time': 29791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323491, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.469 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[92a8ca58-cca2-47f9-b0c8-533f7a6699a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:5c84'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1202648, 'tstamp': 1202648}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323493, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.492 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[392f7408-dfd0-4965-a13a-08222bd5576f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b6dd988-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:22:5c:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 
'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1202648, 'reachable_time': 29791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 
0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323497, 'error': None, 'target': 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost podman[323473]: 2025-10-05 10:02:30.495933567 +0000 UTC m=+0.118984044 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, 
io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter) Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.533 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[31cfb156-5844-445a-9773-c2ed83e356d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost podman[323473]: 2025-10-05 10:02:30.538203142 +0000 UTC m=+0.161253579 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 06:02:30 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.604 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9d188c6a-289f-4fe9-8916-649b9809803a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.606 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b6dd988-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.607 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.607 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b6dd988-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost kernel: device tap3b6dd988-c0 entered promiscuous mode Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.619 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b6dd988-c0, col_values=(('external_ids', {'iface-id': 'bac74788-cacd-4240-bc16-90e5547e0313'}),)) 
do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost ovn_controller[157556]: 2025-10-05T10:02:30Z|00059|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0) Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.626 163201 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b6dd988-c148-4dbf-ae5b-dba073193ccc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b6dd988-c148-4dbf-ae5b-dba073193ccc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.627 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[bc50cf82-5dba-427c-b46f-862f814179c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.630 163201 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: global Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: log /dev/log local0 debug Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: log-tag haproxy-metadata-proxy-3b6dd988-c148-4dbf-ae5b-dba073193ccc Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: user root Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: group root Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: maxconn 1024 Oct 5 06:02:30 localhost 
ovn_metadata_agent[163196]: pidfile /var/lib/neutron/external/pids/3b6dd988-c148-4dbf-ae5b-dba073193ccc.pid.haproxy Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: daemon Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: defaults Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: log global Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: mode http Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: option httplog Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: option dontlognull Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: option http-server-close Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: option forwardfor Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: retries 3 Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: timeout http-request 30s Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: timeout connect 30s Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: timeout client 32s Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: timeout server 32s Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: timeout http-keep-alive 30s Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: listen listener Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: bind 169.254.169.254:80 Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: server metadata /var/lib/neutron/metadata_proxy Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: http-request add-header X-OVN-Network-ID 3b6dd988-c148-4dbf-ae5b-dba073193ccc Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Oct 5 06:02:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:30.631 163201 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'env', 'PROCESS_TAG=haproxy-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b6dd988-c148-4dbf-ae5b-dba073193ccc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.721 2 DEBUG nova.compute.manager [req-b3b30ed0-fcc3-41ed-8ca5-9b8507674f04 req-3a1474d1-3fc2-4dfc-b166-fabf87eaf1c6 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.722 2 DEBUG oslo_concurrency.lockutils [req-b3b30ed0-fcc3-41ed-8ca5-9b8507674f04 req-3a1474d1-3fc2-4dfc-b166-fabf87eaf1c6 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.722 2 DEBUG oslo_concurrency.lockutils [req-b3b30ed0-fcc3-41ed-8ca5-9b8507674f04 req-3a1474d1-3fc2-4dfc-b166-fabf87eaf1c6 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 
5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.723 2 DEBUG oslo_concurrency.lockutils [req-b3b30ed0-fcc3-41ed-8ca5-9b8507674f04 req-3a1474d1-3fc2-4dfc-b166-fabf87eaf1c6 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.723 2 DEBUG nova.compute.manager [req-b3b30ed0-fcc3-41ed-8ca5-9b8507674f04 req-3a1474d1-3fc2-4dfc-b166-fabf87eaf1c6 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 06:02:30 localhost nova_compute[297130]: 2025-10-05 10:02:30.724 2 WARNING nova.compute.manager [req-b3b30ed0-fcc3-41ed-8ca5-9b8507674f04 req-3a1474d1-3fc2-4dfc-b166-fabf87eaf1c6 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received unexpected event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with vm_state active and task_state None.#033[00m Oct 5 06:02:31 localhost podman[323533]: Oct 5 06:02:31 localhost podman[323533]: 2025-10-05 10:02:31.222198159 +0000 UTC m=+0.120090774 container create 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:02:31 localhost podman[323533]: 2025-10-05 10:02:31.170658991 +0000 UTC m=+0.068551646 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Oct 5 06:02:31 localhost systemd[1]: Started libpod-conmon-9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519.scope. Oct 5 06:02:31 localhost systemd[1]: tmp-crun.p5c467.mount: Deactivated successfully. Oct 5 06:02:31 localhost systemd[1]: Started libcrun container. Oct 5 06:02:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ed2e7a0bf04b553ef494a49479d568c87b045656707d19d52fca310600b72b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:02:31 localhost podman[323533]: 2025-10-05 10:02:31.338049275 +0000 UTC m=+0.235941890 container init 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:02:31 localhost podman[323533]: 2025-10-05 10:02:31.348630555 +0000 UTC m=+0.246523180 container start 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:31 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [NOTICE] (323552) : New worker (323554) forked Oct 5 06:02:31 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [NOTICE] (323552) : Loading success. Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.419 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 1374da87-a9a5-4840-80a7-197494b76131 in datapath 9493e121-6caf-4009-9106-31c87685c480 unbound from our chassis#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.428 163201 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9493e121-6caf-4009-9106-31c87685c480#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.439 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[aed57112-2cd4-472a-9a04-36d14d60a3e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.441 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9493e121-61 in ovnmeta-9493e121-6caf-4009-9106-31c87685c480 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.445 271895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9493e121-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.446 271895 DEBUG oslo.privsep.daemon [-] privsep: 
reply[c8d191ed-0db9-4bb8-b939-acfad7455b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.448 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[de52f213-3e1b-4281-a992-2027367d3f03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.471 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4109b4-cc4e-4ee3-9b4b-538b64d5c7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.496 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ccbd28-3968-42a6-8c8f-8da59c0edce3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.527 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbf2d09-a1ba-4241-b19b-8e462e8eb0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.535 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[333a3eee-17a7-4708-8c4c-5095f1c7f9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:31 localhost NetworkManager[5970]: [1759658551.5369] manager: (tap9493e121-60): new Veth device (/org/freedesktop/NetworkManager/Devices/19) Oct 5 06:02:31 localhost systemd-udevd[323474]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:02:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 192 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.573 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[3293aefb-bfa2-46bf-baf2-49270e19506f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.577 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[ea62e8a3-8d96-47be-852f-132f88758a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9493e121-61: link becomes ready
Oct 5 06:02:31 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap9493e121-60: link becomes ready
Oct 5 06:02:31 localhost NetworkManager[5970]: [1759658551.6081] device (tap9493e121-60): carrier: link connected
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.614 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d6a2d0-4cab-4aa1-9b81-44cb68551845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.634 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[55ec2546-4160-461c-97f9-2dd5f6f627fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9493e121-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a2:ce:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1202767, 'reachable_time': 20791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323574, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.653 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[14b2bba8-f3ff-4920-9a3b-84fc458f1648]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:ceee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1202767, 'tstamp': 1202767}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323575, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.675 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3850d273-940c-4614-8ff8-65b5edec9401]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9493e121-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:a2:ce:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1202767, 'reachable_time': 20791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323576, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.710 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d12fcd-0210-416f-affe-6fb9186d5688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.775 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9d895327-519e-4a6f-bf6c-31f2ff802d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.784 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9493e121-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.785 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.786 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9493e121-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:31 localhost nova_compute[297130]: 2025-10-05 10:02:31.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:31 localhost kernel: device tap9493e121-60 entered promiscuous mode
Oct 5 06:02:31 localhost nova_compute[297130]: 2025-10-05 10:02:31.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.796 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9493e121-60, col_values=(('external_ids', {'iface-id': '3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:02:31 localhost nova_compute[297130]: 2025-10-05 10:02:31.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:31 localhost ovn_controller[157556]: 2025-10-05T10:02:31Z|00060|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0)
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.804 163201 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9493e121-6caf-4009-9106-31c87685c480.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9493e121-6caf-4009-9106-31c87685c480.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.805 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e89f74-8ebd-4729-b823-587963225bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.808 163201 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg =
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: global
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: log /dev/log local0 debug
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: log-tag haproxy-metadata-proxy-9493e121-6caf-4009-9106-31c87685c480
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: user root
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: group root
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: maxconn 1024
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: pidfile /var/lib/neutron/external/pids/9493e121-6caf-4009-9106-31c87685c480.pid.haproxy
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: daemon
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]:
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: defaults
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: log global
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: mode http
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: option httplog
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: option dontlognull
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: option http-server-close
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: option forwardfor
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: retries 3
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: timeout http-request 30s
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: timeout connect 30s
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: timeout client 32s
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: timeout server 32s
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: timeout http-keep-alive 30s
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]:
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]:
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: listen listener
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: bind 169.254.169.254:80
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: server metadata /var/lib/neutron/metadata_proxy
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: http-request add-header X-OVN-Network-ID 9493e121-6caf-4009-9106-31c87685c480
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 5 06:02:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:31.810 163201 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'env', 'PROCESS_TAG=haproxy-9493e121-6caf-4009-9106-31c87685c480', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9493e121-6caf-4009-9106-31c87685c480.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 5 06:02:31 localhost nova_compute[297130]: 2025-10-05 10:02:31.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:32 localhost podman[323609]:
Oct 5 06:02:32 localhost podman[323609]: 2025-10-05 10:02:32.290047518 +0000 UTC m=+0.112188567 container create f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:32 localhost nova_compute[297130]: 2025-10-05 10:02:32.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:32 localhost podman[323609]: 2025-10-05 10:02:32.237333137 +0000 UTC m=+0.059474286 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 5 06:02:32 localhost systemd[1]: Started libpod-conmon-f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8.scope.
Oct 5 06:02:32 localhost systemd[1]: Started libcrun container.
Oct 5 06:02:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a670a74024f4ca1ba69bdb9356fc4c8b47f6b88b6007d6e1fab4614293754e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:02:32 localhost podman[323609]: 2025-10-05 10:02:32.366969601 +0000 UTC m=+0.189110660 container init f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 5 06:02:32 localhost podman[323609]: 2025-10-05 10:02:32.375918916 +0000 UTC m=+0.198059985 container start f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 06:02:32 localhost nova_compute[297130]: 2025-10-05 10:02:32.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:32 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [NOTICE] (323627) : New worker (323629) forked
Oct 5 06:02:32 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [NOTICE] (323627) : Loading success.
Oct 5 06:02:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:32.441 163201 INFO neutron.agent.ovn.metadata.agent [-] Port ba67cae5-6ac3-47b0-b591-9bfe2a94b8b2 in datapath 8b0fb53c-a380-4532-8d67-7340b4a78d0a unbound from our chassis
Oct 5 06:02:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:32.446 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b0fb53c-a380-4532-8d67-7340b4a78d0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:02:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:32.448 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[2de8115a-c2f9-4288-865d-666005d305b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:02:32 localhost podman[323655]: 2025-10-05 10:02:32.834419649 +0000 UTC m=+0.072542984 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 5 06:02:32 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 5 addresses
Oct 5 06:02:32 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:02:32 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:02:33 localhost ovn_controller[157556]: 2025-10-05T10:02:33Z|00061|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0)
Oct 5 06:02:33 localhost ovn_controller[157556]: 2025-10-05T10:02:33Z|00062|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0)
Oct 5 06:02:33 localhost nova_compute[297130]: 2025-10-05 10:02:33.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:02:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:02:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 192 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 5 06:02:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:02:35 localhost systemd[1]: tmp-crun.SCDOQS.mount: Deactivated successfully.
Oct 5 06:02:35 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses
Oct 5 06:02:35 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:02:35 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:02:35 localhost podman[323694]: 2025-10-05 10:02:35.123630394 +0000 UTC m=+0.075192947 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:02:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 192 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 91 op/s
Oct 5 06:02:35 localhost ovn_controller[157556]: 2025-10-05T10:02:35Z|00063|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0)
Oct 5 06:02:35 localhost ovn_controller[157556]: 2025-10-05T10:02:35Z|00064|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0)
Oct 5 06:02:35 localhost nova_compute[297130]: 2025-10-05 10:02:35.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:36 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses
Oct 5 06:02:36 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:02:36 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:02:36 localhost podman[323730]: 2025-10-05 10:02:36.111008513 +0000 UTC m=+0.047327364 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 06:02:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 06:02:36 localhost systemd[1]: tmp-crun.ltsEtI.mount: Deactivated successfully.
Oct 5 06:02:36 localhost podman[323742]: 2025-10-05 10:02:36.230185881 +0000 UTC m=+0.097186257 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Oct 5 06:02:36 localhost podman[323742]: 2025-10-05 10:02:36.260585803 +0000 UTC m=+0.127586179 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:36 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 06:02:36 localhost ovn_controller[157556]: 2025-10-05T10:02:36Z|00065|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0)
Oct 5 06:02:36 localhost ovn_controller[157556]: 2025-10-05T10:02:36Z|00066|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0)
Oct 5 06:02:36 localhost nova_compute[297130]: 2025-10-05 10:02:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:02:36 localhost dnsmasq[322674]: exiting on receipt of SIGTERM
Oct 5 06:02:36 localhost podman[323785]: 2025-10-05 10:02:36.915597786 +0000 UTC m=+0.065339676 container kill 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:02:36 localhost systemd[1]: libpod-382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b.scope: Deactivated successfully.
Oct 5 06:02:36 localhost nova_compute[297130]: 2025-10-05 10:02:36.985 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Check if temp file /var/lib/nova/instances/tmprnxabv3g exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 5 06:02:36 localhost nova_compute[297130]: 2025-10-05 10:02:36.986 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b1dce7a2-b06b-4cdb-b072-ccd123742ded',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 5 06:02:36 localhost podman[323797]: 2025-10-05 10:02:36.994507554 +0000 UTC m=+0.066903280 container died 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 5 06:02:37 localhost podman[323797]: 2025-10-05 10:02:37.030172368 +0000 UTC m=+0.102568054 container cleanup 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:02:37 localhost systemd[1]: libpod-conmon-382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b.scope: Deactivated successfully.
Oct 5 06:02:37 localhost podman[323805]: 2025-10-05 10:02:37.067297954 +0000 UTC m=+0.128028301 container remove 382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b0fb53c-a380-4532-8d67-7340b4a78d0a, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:02:37 localhost systemd[1]: tmp-crun.uQOuV9.mount: Deactivated successfully.
Oct 5 06:02:37 localhost systemd[1]: var-lib-containers-storage-overlay-cf68057b3224cd0d95b97370dcfd6c9c177b45856feb5f16aef08acd4cc55fc6-merged.mount: Deactivated successfully.
Oct 5 06:02:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-382caec8d374abd7866252894463bd7f7d9fc79bd9195f4ef452f9022674305b-userdata-shm.mount: Deactivated successfully.
Oct 5 06:02:37 localhost systemd[1]: run-netns-qdhcp\x2d8b0fb53c\x2da380\x2d4532\x2d8d67\x2d7340b4a78d0a.mount: Deactivated successfully.
Oct 5 06:02:37 localhost nova_compute[297130]: 2025-10-05 10:02:37.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:37.375 271653 INFO neutron.agent.dhcp.agent [None req-fdc14c9a-fdfb-463a-beca-2ff87e7d522e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:02:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:37.376 271653 INFO neutron.agent.dhcp.agent [None req-fdc14c9a-fdfb-463a-beca-2ff87e7d522e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:02:37 localhost nova_compute[297130]: 2025-10-05 10:02:37.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 238 MiB data, 857 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 128 op/s Oct 5 06:02:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:37.605 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:02:37 localhost nova_compute[297130]: 2025-10-05 10:02:37.801 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:37 localhost nova_compute[297130]: 2025-10-05 10:02:37.802 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquired lock "compute-rpcapi-router" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:37 localhost nova_compute[297130]: 2025-10-05 10:02:37.811 2 INFO nova.compute.rpcapi [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Oct 5 06:02:37 localhost nova_compute[297130]: 2025-10-05 10:02:37.812 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:37.918 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:37Z, description=, device_id=b1ce920c-beca-46e2-9ac1-6e2a09f8eaa6, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=00098456-36d5-4869-8993-9e0a1ab43e12, ip_allocation=immediate, mac_address=fa:16:3e:72:5f:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], 
tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=590, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:02:37Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:02:38 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:02:38 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:38 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:38 localhost podman[323846]: 2025-10-05 10:02:38.136038837 +0000 UTC m=+0.061802730 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:02:38 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:02:38.402 271653 INFO neutron.agent.dhcp.agent [None req-81a04fdf-00e3-4043-b7cd-285c37854bcb - - - - - -] DHCP configuration for ports {'00098456-36d5-4869-8993-9e0a1ab43e12'} is completed#033[00m Oct 5 06:02:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.119 2 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 238 MiB data, 857 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.665 2 DEBUG nova.compute.manager [req-7c2dfea1-a76c-445a-a6d2-666b9c59c4f7 req-e0c48e7b-39ee-4a07-b15c-a91f66dff8ea 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.666 2 DEBUG oslo_concurrency.lockutils [req-7c2dfea1-a76c-445a-a6d2-666b9c59c4f7 req-e0c48e7b-39ee-4a07-b15c-a91f66dff8ea 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.667 2 DEBUG oslo_concurrency.lockutils [req-7c2dfea1-a76c-445a-a6d2-666b9c59c4f7 req-e0c48e7b-39ee-4a07-b15c-a91f66dff8ea 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.667 2 DEBUG oslo_concurrency.lockutils [req-7c2dfea1-a76c-445a-a6d2-666b9c59c4f7 
req-e0c48e7b-39ee-4a07-b15c-a91f66dff8ea 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.668 2 DEBUG nova.compute.manager [req-7c2dfea1-a76c-445a-a6d2-666b9c59c4f7 req-e0c48e7b-39ee-4a07-b15c-a91f66dff8ea 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 06:02:39 localhost nova_compute[297130]: 2025-10-05 10:02:39.668 2 DEBUG nova.compute.manager [req-7c2dfea1-a76c-445a-a6d2-666b9c59c4f7 req-e0c48e7b-39ee-4a07-b15c-a91f66dff8ea 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 5 06:02:39 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:39.941 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}b03b99604ddd0b37d9a2fda2e8e2e6ec62c60a1f1212e3b4e055ac4453c6d20c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.583 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Sun, 05 Oct 2025 10:02:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-0fdc88be-55f7-41c8-a574-ab3848e3d008 x-openstack-request-id: req-0fdc88be-55f7-41c8-a574-ab3848e3d008 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.583 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "18f8d401-b618-4270-82ff-cd1fa3301fb0", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/18f8d401-b618-4270-82ff-cd1fa3301fb0"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/18f8d401-b618-4270-82ff-cd1fa3301fb0"}]}, {"id": "76acf371-9e6c-4c5c-aec4-748e712efe27", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/76acf371-9e6c-4c5c-aec4-748e712efe27"}]}, {"id": 
"97ddc44b-feec-4b28-874c-024e6ebcea56", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/97ddc44b-feec-4b28-874c-024e6ebcea56"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/97ddc44b-feec-4b28-874c-024e6ebcea56"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.583 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-0fdc88be-55f7-41c8-a574-ab3848e3d008 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.587 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/97ddc44b-feec-4b28-874c-024e6ebcea56 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}b03b99604ddd0b37d9a2fda2e8e2e6ec62c60a1f1212e3b4e055ac4453c6d20c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.612 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Sun, 05 Oct 2025 10:02:40 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c876018d-d403-4f19-a9cd-b454df3a7d43 x-openstack-request-id: req-c876018d-d403-4f19-a9cd-b454df3a7d43 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.612 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": 
"97ddc44b-feec-4b28-874c-024e6ebcea56", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/97ddc44b-feec-4b28-874c-024e6ebcea56"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/97ddc44b-feec-4b28-874c-024e6ebcea56"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.613 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/97ddc44b-feec-4b28-874c-024e6ebcea56 used request id req-c876018d-d403-4f19-a9cd-b454df3a7d43 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.615 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'np0005471152.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '1b069d6351214d1baf4ff391a6512beb', 'user_id': 'b56f1071781246a68c1693519a9cd054', 'hostId': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.615 12 INFO ceilometer.polling.manager [-] Polling pollster 
disk.device.latency in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.616 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.616 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.616 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.622 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b1dce7a2-b06b-4cdb-b072-ccd123742ded / tap1374da87-a9 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.622 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '15c1d09c-e9b3-46a6-b40a-8ff1e824a2af', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 90, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.616702', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6dd2e17c-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '7f400d8a2ca3fe5c1c4c90752639bc13492f0576c6a127a07a23afacb65f78d6'}]}, 'timestamp': '2025-10-05 10:02:40.623051', '_unique_id': '68c7475f4c3a468a9c4dba77fe02f397'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.629 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.632 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.660 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.write.bytes volume: 6074368 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.665 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '50e4a464-13ba-4bcd-8a06-afc030a71695', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6074368, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.633046', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6dd9327a-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': 'f62b8b30fde11952f135408cf3c547c6a07963b6604c5098ca85d7b7928d04d3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.633046', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6dd988ec-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '63722452fb8fcd93b80c9e80c123c16a21db2757c4f355944eafce79e7012340'}]}, 'timestamp': '2025-10-05 10:02:40.666539', '_unique_id': '4894f25100504087bb0b63ef6ff6497f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.668 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.670 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.670 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '78b027b9-dc0a-48fa-a487-df19992d3984', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.670332', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6dda35bc-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': 'ad2ca43b8186716bcef98cd8534b9cd79377897e2423a8c6692c01312c0e514e'}]}, 'timestamp': '2025-10-05 10:02:40.671015', '_unique_id': '7c2e03d8325e43c0a6c8284efc417e90'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.672 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.673 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.673 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.read.bytes volume: 25349632 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.674 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.read.bytes volume: 55474 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e0d5a77a-b5c2-4e46-a914-df91a7a1949b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 25349632, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.673555', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6ddabe92-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '10530124be1eb8c37326faaa3fac1e0624895bb2ffc2586408fd9adb60377d38'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 55474, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.673555', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6ddad54e-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': 'd1311f7bffffae00a1fb346877e43d0309ad9e316db829fd84d6c91760157e03'}]}, 'timestamp': '2025-10-05 10:02:40.674912', '_unique_id': 'e259f073f36e4adf99c304df89bf8e08'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in
_ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.675 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.676 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.676 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.676 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.676 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.676 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bf3bef65-56c2-4c62-860c-736bd2b36be2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 1, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.676716', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 
'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6ddb3138-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': 'eae94cc49a75a725281abb3e286335f18b1ae481b1ae5134d1027ddd53aacbdb'}]}, 'timestamp': '2025-10-05 10:02:40.677329', '_unique_id': 'b93e406cf36c4b6db2a237b11fcdcab5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.677 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.678 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.678 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b3fac0d6-fb56-455c-a45f-1ea14e09847d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.678953', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6ddb7f12-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '0d3409210c95643052a78a1d0b604899bab7492bb930fb77b9f3f95de1a6ff82'}]}, 'timestamp': '2025-10-05 10:02:40.679246', '_unique_id': 'e53a4a7706eb4c198e770e8e9f0fd35d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.679 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:02:40.680 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.693 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.694 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e69bbd27-b8f0-4303-8c29-8a37e69b06dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.680589', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6dddc240-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.819817142, 'message_signature': '1b93b7733eec55c3dfb712829a8bc0efcc6bb66815cd7f0378e11ee1af7c7215'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.680589', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6dddcf10-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.819817142, 'message_signature': '8f6a16289a9432f49ef013910fad3c2610469f9a9e3e29de6572bd24856364f2'}]}, 'timestamp': '2025-10-05 10:02:40.694588', '_unique_id': 'b009c85816a74f6eb0163aaaec5ebd52'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.696 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.697 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.697 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c44b6a9e-5c01-4521-a4bf-58c2e34fc78f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.697417', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': 
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6dde606a-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '50636ed66dccbf700aee4d598744734b7eb84840b626ad501ad7a9bfdd883f75'}]}, 'timestamp': '2025-10-05 10:02:40.698516', '_unique_id': '0701ead891ea467ea4d2581fd7b8e2b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.699 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.700 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.700 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.write.latency volume: 2356029032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.700 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd08b2dd9-a376-4ae5-9719-69d3747b5697', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 2356029032, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.700367', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6ddec38e-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '0d26b308b7eaf0615620652bab479af56517472580d82b820eee9f7052afbd2b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.700367', 'resource_metadata': {'display_name': 
'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6ddecf96-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': 'be711baa3c99628ade93c76e298ea2adae5a8c3fca5322d37e3b948c25d1bd6c'}]}, 'timestamp': '2025-10-05 10:02:40.700976', '_unique_id': '0993e0b0fbde449eab3fc700e572ca31'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.701 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.702 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.703 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8442775c-7b60-4265-82be-ab55a79b7250', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.703063', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6ddf2f4a-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '7277345fc746d2a8a685d97f8350a79ae9b575c844128dec4fa9ba65011775f8'}]}, 'timestamp': '2025-10-05 10:02:40.703448', '_unique_id': '95413214b44b4b0daed69c970e4acf04'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): 
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.704 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:02:40.705 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.723 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/cpu volume: 10860000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c0f3762b-0b82-411c-aa67-619fa8f8205a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 10860000000, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'timestamp': '2025-10-05T10:02:40.705465', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': '6de25e2c-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.862521459, 
'message_signature': '49f45d19097c29f56fc2e2a0d1be0c8e6060f1aedc70d2a6633c0ed214b0a54f'}]}, 'timestamp': '2025-10-05 10:02:40.724326', '_unique_id': 'cac97db0456b48298b22ceb028f4efca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging raise 
ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.725 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.726 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.726 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.read.requests volume: 838 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.726 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.read.requests volume: 20 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f8d9e696-9880-4d9c-8915-d78dc7416270', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 838, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.726113', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6de2b14c-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '25f9cb3861eee8f9f0dd5ed05e269101db2d3ee0e9b9b91369fc357bca83b89f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 20, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.726113', 'resource_metadata': {'display_name': 
'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6de2bb2e-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': 'f95a16937a717e9c86c0a9f4899c5e45ff3397ab76f9bb63e9e10832cecf2939'}]}, 'timestamp': '2025-10-05 10:02:40.726631', '_unique_id': 'ae67d87d87384db1903d3170ca47211e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging yield
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging conn.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 ERROR oslo_messaging.notify.messaging
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.727 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.728 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.728 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Oct 5 06:02:40 localhost
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.728 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.728 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.730 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b4335717-e9b9-41d5-a904-f5ba8de4b49a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.728356', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb':
0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6de3094e-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': 'd00792dab27505b40da0895d32a942e68aa4c077ab6030267cc66cb9f82da3d7'}]}, 'timestamp': '2025-10-05 10:02:40.728650', '_unique_id': '19e52d76befc473ab342997540802a12'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.732 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.732 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.write.requests volume: 51 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.733 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '31fb5129-1f70-4d0c-aa02-f69ae9bd86a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 51, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.732610', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6de3ba10-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '67569f933f7eb544c70909eac96231887f6781232be179e9c9703b5d32e44e35'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.732610', 'resource_metadata': {'display_name': 
'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6de3d126-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '132f326cb9218c7c33ba2fd1100737e0eff7f6e84fc562159ae36d242156297e'}]}, 'timestamp': '2025-10-05 10:02:40.733872', '_unique_id': 'c5d46a587fc54bb0ac78238af7414f1f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
_ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.734 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.736 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.736 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5d64f65e-90e9-48c2-9d70-d08e14183653', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.736344', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6de44bc4-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '858b790d9536fa03a30dc8f7dac6214e72290fa839f5a66a88dba7f979be2e47'}]}, 'timestamp': '2025-10-05 10:02:40.737033', '_unique_id': '3eb74d140a80400d9928ecaed9a9807c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.737 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:02:40.739 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.739 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.read.latency volume: 1322704224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.739 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.read.latency volume: 16686929 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2352036a-8ec7-48fc-8dac-e3a59f2a2313', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1322704224, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.739321', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 
'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6de4b9ce-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': '6b196083cef746431f07bc61c8535c16447647766e6dbe691bcba36c3f0b5b82'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16686929, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.739321', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6de4d044-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.772352024, 'message_signature': 'f255218db9e0ca7410df93f483741865286611f58f91c500cba7474de559a3ac'}]}, 'timestamp': '2025-10-05 10:02:40.740466', '_unique_id': 'fb6592c7357b4f61b394d4b072652ca3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.741 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.743 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.743 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ffc88ee9-7bb3-4cf7-9855-70412c1758e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.743362', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': 
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6de5579e-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '5faba59981ada5b7c842f7596e633db982d360a13cae302d98e46cf56b179666'}]}, 'timestamp': '2025-10-05 10:02:40.743925', '_unique_id': 'b8ed6a330ebf46e6952a00da14ef698b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.745 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.746 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.746 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/memory.usage volume: 40.45703125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '273fcd73-904b-4356-8ae9-c2b587d5ec4c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 40.45703125, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'timestamp': '2025-10-05T10:02:40.746672', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1}, 'message_id': '6de5dda4-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.862521459, 'message_signature': 'baa440a316544b766a639307a2005fd5648bdcf0187e854e584627ae892ac483'}]}, 'timestamp': '2025-10-05 10:02:40.747343', '_unique_id': '2ffb3d9caf344ef8812dee4ab8ac82b2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.748 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.749 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.750 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.750 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd764e77c-afbf-4dca-ab3c-30c6fad87dec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.750009', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6de659fa-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.819817142, 'message_signature': 'd0c9dc629f49b3ac5c2bfafd3f5bb0732049cc251e1747f9c9679ae55e893b4d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.750009', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6de66cf6-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.819817142, 'message_signature': '8e3cf4ac74518705b81af9666b20f3a9a60140e5f3bfa5315f58037f08bef326'}]}, 'timestamp': '2025-10-05 10:02:40.750956', '_unique_id': 'a07ddef4df2d46a9a44561cead0a3db4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.751 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:02:40.753 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.753 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.753 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.753 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.753 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.754 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f6cd42fc-5638-4122-9ce2-e0c5948e4a9d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-vda', 'timestamp': '2025-10-05T10:02:40.753836', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '6de6f036-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.819817142, 'message_signature': 'ebf887e3f7de79cfe1e99a88170e055c673f78bfdfa82ea9078df3bbf09d7d3c'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded-sda', 'timestamp': '2025-10-05T10:02:40.753836', 'resource_metadata': {'display_name': 
'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'instance-00000007', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '6de7026a-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.819817142, 'message_signature': '041513f8bf87d211e4cfcf41fa507564d9f87e312bf7f79ab3661006496d829e'}]}, 'timestamp': '2025-10-05 10:02:40.754763', '_unique_id': '861285dfd2cc49088a37919988acee18'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.755 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.757 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.757 12 DEBUG ceilometer.compute.pollsters [-] b1dce7a2-b06b-4cdb-b072-ccd123742ded/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3bc6f526-c1bd-4f30-a2dd-c62d088a44aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b56f1071781246a68c1693519a9cd054', 'user_name': None, 'project_id': '1b069d6351214d1baf4ff391a6512beb', 'project_name': None, 'resource_id': 'instance-00000007-b1dce7a2-b06b-4cdb-b072-ccd123742ded-tap1374da87-a9', 'timestamp': '2025-10-05T10:02:40.757540', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-2001023684', 'name': 'tap1374da87-a9', 'instance_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'instance_type': 'm1.nano', 'host': '6d5fb5785cdf8efdf1f66bbd2083674bfa89f514680c5265349bf917', 'instance_host': 'np0005471152.localdomain', 'flavor': {'id': '97ddc44b-feec-4b28-874c-024e6ebcea56', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}, 'image_ref': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:4b:06:97', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap1374da87-a9'}, 'message_id': '6de781e0-a1d2-11f0-9432-fa163e3e9936', 'monotonic_time': 12036.755910165, 'message_signature': '312c6f89221587256f2aa6d09dc43d12d6d712dac8ee0a74f8d1ed1766a5e0b0'}]}, 'timestamp': '2025-10-05 10:02:40.758046', '_unique_id': 'c461783788b345e0ad3f53c9db03cb29'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): 
Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging yield Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging conn.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Oct 5 
06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Oct 5 06:02:40 localhost 
ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Oct 5 06:02:40 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:02:40.759 12 ERROR oslo_messaging.notify.messaging Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 
10:02:40.952 2 INFO nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Took 3.15 seconds for pre_live_migration on destination host np0005471150.localdomain.#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.953 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.974 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmprnxabv3g',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b1dce7a2-b06b-4cdb-b072-ccd123742ded',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(afa45f89-4fc9-4778-b6b9-a613dd5960d9),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.979 2 DEBUG nova.objects.instance [None req-8958f746-4503-48be-b2a9-7764a3a89978 
5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lazy-loading 'migration_context' on Instance uuid b1dce7a2-b06b-4cdb-b072-ccd123742ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.980 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.981 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.982 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Oct 5 06:02:40 localhost nova_compute[297130]: 2025-10-05 10:02:40.999 2 DEBUG nova.virt.libvirt.vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-05T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2001023684',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2001023684',id=7,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T10:02:28Z,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b069d6351214d1baf4ff391a6512beb',ramdisk_id='',reservation_id='r-k8v41bv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1030348059',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-05T10:02:28Z,user_data=None,user_id=
'b56f1071781246a68c1693519a9cd054',uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:40.999 2 DEBUG nova.network.os_vif_util [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Converting VIF {"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.000 2 DEBUG nova.network.os_vif_util [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.001 2 DEBUG nova.virt.libvirt.migration [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating guest XML with vif config: Oct 5 06:02:41 localhost nova_compute[297130]: Oct 5 06:02:41 localhost nova_compute[297130]: Oct 5 06:02:41 localhost nova_compute[297130]: Oct 5 06:02:41 localhost nova_compute[297130]: Oct 5 06:02:41 localhost nova_compute[297130]: Oct 5 06:02:41 localhost nova_compute[297130]: Oct 5 
06:02:41 localhost nova_compute[297130]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.001 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.485 2 DEBUG nova.virt.libvirt.migration [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.486 2 INFO nova.virt.libvirt.migration [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Oct 5 06:02:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 238 MiB data, 857 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.642 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migration running for 0 secs, memory 100% remaining (bytes 
processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Oct 5 06:02:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:02:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:02:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:02:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:02:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:02:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.824 2 DEBUG nova.compute.manager [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.825 2 DEBUG oslo_concurrency.lockutils [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.825 2 DEBUG oslo_concurrency.lockutils [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" 
acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.825 2 DEBUG oslo_concurrency.lockutils [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.826 2 DEBUG nova.compute.manager [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.826 2 WARNING nova.compute.manager [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received unexpected event network-vif-plugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with vm_state active and task_state migrating.#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.826 2 DEBUG nova.compute.manager [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event 
network-changed-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.827 2 DEBUG nova.compute.manager [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Refreshing instance network info cache due to event network-changed-1374da87-a9a5-4840-80a7-197494b76131. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.827 2 DEBUG oslo_concurrency.lockutils [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.827 2 DEBUG oslo_concurrency.lockutils [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquired lock "refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:02:41 localhost nova_compute[297130]: 2025-10-05 10:02:41.828 2 DEBUG nova.network.neutron [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Refreshing network info cache for port 1374da87-a9a5-4840-80a7-197494b76131 _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.145 2 DEBUG nova.virt.libvirt.migration [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.145 2 DEBUG nova.virt.libvirt.migration [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.238 2 DEBUG nova.network.neutron [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updated VIF entry in instance network info cache for port 1374da87-a9a5-4840-80a7-197494b76131. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.238 2 DEBUG nova.network.neutron [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Updating instance_info_cache with network_info: [{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005471150.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.261 2 DEBUG oslo_concurrency.lockutils [req-e7703680-7f9a-41af-a56c-3420c1da6dcf req-c177bd99-de9f-4cd7-b685-453473160511 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Releasing lock 
"refresh_cache-b1dce7a2-b06b-4cdb-b072-ccd123742ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.557 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.557 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Paused (Lifecycle Event)#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.587 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.591 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.614 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] During sync_power_state the instance has a pending task (migrating). 
Skip.#033[00m
Oct 5 06:02:42 localhost kernel: device tap1374da87-a9 left promiscuous mode
Oct 5 06:02:42 localhost NetworkManager[5970]: [1759658562.6929] device (tap1374da87-a9): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00067|binding|INFO|Releasing lport 1374da87-a9a5-4840-80a7-197494b76131 from this chassis (sb_readonly=0)
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00068|binding|INFO|Setting lport 1374da87-a9a5-4840-80a7-197494b76131 down in Southbound
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00069|binding|INFO|Releasing lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee from this chassis (sb_readonly=0)
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00070|binding|INFO|Setting lport 3fa04c44-9142-4d6c-991f-aca11ea8e8ee down in Southbound
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00071|binding|INFO|Removing iface tap1374da87-a9 ovn-installed in OVS
Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00072|binding|INFO|Releasing lport bac74788-cacd-4240-bc16-90e5547e0313 from this chassis (sb_readonly=0)
Oct 5 06:02:42 localhost ovn_controller[157556]: 2025-10-05T10:02:42Z|00073|binding|INFO|Releasing lport 3e3624ce-bb97-4afa-8cde-da5b0ca8ffd0 from this chassis (sb_readonly=0)
Oct 5 06:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:02:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:42.716 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:90:0e 19.80.0.175'], port_security=['fa:16:3e:ce:90:0e 19.80.0.175'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['1374da87-a9a5-4840-80a7-197494b76131'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-973969040', 'neutron:cidrs': '19.80.0.175/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-973969040', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c80697f7-3043-40b9-ba7e-9e4d45b917f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3fa04c44-9142-4d6c-991f-aca11ea8e8ee) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:42.719 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:06:97 10.100.0.12'], 
port_security=['fa:16:3e:4b:06:97 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain,np0005471150.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': '3b30d637-702a-429f-9027-888244ff6474'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-738433439', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b1dce7a2-b06b-4cdb-b072-ccd123742ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9493e121-6caf-4009-9106-31c87685c480', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-738433439', 'neutron:project_id': '1b069d6351214d1baf4ff391a6512beb', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0269f0ba-15e7-46b3-9fe6-9a4bc91e9d33, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1374da87-a9a5-4840-80a7-197494b76131) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:42.720 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 3fa04c44-9142-4d6c-991f-aca11ea8e8ee in datapath 3b6dd988-c148-4dbf-ae5b-dba073193ccc unbound from our chassis#033[00m Oct 5 06:02:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:42.723 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b6dd988-c148-4dbf-ae5b-dba073193ccc, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:02:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:42.724 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9a23f1a7-bb06-4914-9a8d-b8449e58d2e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:02:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:42.728 163201 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc namespace which is not needed anymore#033[00m
Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:02:42 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 5 06:02:42 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 13.529s CPU time.
Oct 5 06:02:42 localhost systemd-machined[206743]: Machine qemu-1-instance-00000007 terminated.
Oct 5 06:02:42 localhost journal[237275]: Unable to get XATTR trusted.libvirt.security.ref_selinux on b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk: No such file or directory
Oct 5 06:02:42 localhost journal[237275]: Unable to get XATTR trusted.libvirt.security.ref_dac on b1dce7a2-b06b-4cdb-b072-ccd123742ded_disk: No such file or directory
Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.861 2 DEBUG nova.virt.libvirt.guest [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.862 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migration operation has completed#033[00m
Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.862 2 INFO nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] _post_live_migration() is started..#033[00m
Oct 5 06:02:42 localhost podman[323881]: 2025-10-05 10:02:42.810507303 +0000 UTC m=+0.077289134 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate':
True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:02:42 localhost systemd[1]: tmp-crun.By4iRT.mount: Deactivated successfully. 
Oct 5 06:02:42 localhost podman[323880]: 2025-10-05 10:02:42.870881143 +0000 UTC m=+0.142012062 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.875 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 
5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.876 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Oct 5 06:02:42 localhost nova_compute[297130]: 2025-10-05 10:02:42.876 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Oct 5 06:02:42 localhost podman[323880]: 2025-10-05 10:02:42.881998537 +0000 UTC m=+0.153129446 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:02:42 localhost podman[323881]: 2025-10-05 10:02:42.893283715 +0000 UTC m=+0.160065566 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:02:42 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:02:42 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:02:42 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [NOTICE] (323552) : haproxy version is 2.8.14-c23fe91 Oct 5 06:02:42 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [NOTICE] (323552) : path to executable is /usr/sbin/haproxy Oct 5 06:02:42 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [WARNING] (323552) : Exiting Master process... Oct 5 06:02:42 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [WARNING] (323552) : Exiting Master process... Oct 5 06:02:42 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [ALERT] (323552) : Current worker (323554) exited with code 143 (Terminated) Oct 5 06:02:42 localhost neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc[323548]: [WARNING] (323552) : All workers exited. Exiting... (0) Oct 5 06:02:42 localhost systemd[1]: libpod-9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519.scope: Deactivated successfully. 
Oct 5 06:02:42 localhost podman[323929]: 2025-10-05 10:02:42.983011678 +0000 UTC m=+0.138420354 container died 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:02:43 localhost podman[323929]: 2025-10-05 10:02:43.024093401 +0000 UTC m=+0.179502037 container cleanup 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:02:43 localhost podman[323960]: 2025-10-05 10:02:43.049938598 +0000 UTC m=+0.062247733 container cleanup 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS 
Stream 9 Base Image) Oct 5 06:02:43 localhost systemd[1]: libpod-conmon-9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519.scope: Deactivated successfully. Oct 5 06:02:43 localhost podman[323974]: 2025-10-05 10:02:43.112626612 +0000 UTC m=+0.070141879 container remove 9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.116 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4a4eb4-acd7-4ed4-81ae-e85d111ed53c]: (4, ('Sun Oct 5 10:02:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc (9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519)\n9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519\nSun Oct 5 10:02:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc (9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519)\n9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.118 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[750637a0-557d-4595-ace9-509324ccfd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.119 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 
command(idx=0): DelPortCommand(_result=None, port=tap3b6dd988-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:43 localhost kernel: device tap3b6dd988-c0 left promiscuous mode Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.136 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[593ea90f-7e8e-48fe-938a-0e9a6ea5fa51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.147 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[0d573987-6ff9-4754-b380-4da61527bc04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.148 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[85b24b82-9d15-4026-bbce-f08197eff17c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.159 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[34d08ffb-5faf-4376-a76a-b237e1469423]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], 
['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 
'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1202635, 'reachable_time': 39351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323997, 'error': None, 'target': 
'ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.168 163334 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b6dd988-c148-4dbf-ae5b-dba073193ccc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.169 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[80e3ac11-e21b-4439-a573-eee0af84bf36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.170 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 1374da87-a9a5-4840-80a7-197494b76131 in datapath 9493e121-6caf-4009-9106-31c87685c480 unbound from our chassis#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.173 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9493e121-6caf-4009-9106-31c87685c480, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.173 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[0940b905-378f-4ecc-8c2f-2c1085ed0ac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.174 163201 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9493e121-6caf-4009-9106-31c87685c480 namespace which is not needed anymore#033[00m Oct 5 06:02:43 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [NOTICE] (323627) : haproxy version is 2.8.14-c23fe91 Oct 5 06:02:43 localhost 
neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [NOTICE] (323627) : path to executable is /usr/sbin/haproxy Oct 5 06:02:43 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [WARNING] (323627) : Exiting Master process... Oct 5 06:02:43 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [WARNING] (323627) : Exiting Master process... Oct 5 06:02:43 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [ALERT] (323627) : Current worker (323629) exited with code 143 (Terminated) Oct 5 06:02:43 localhost neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480[323623]: [WARNING] (323627) : All workers exited. Exiting... (0) Oct 5 06:02:43 localhost systemd[1]: libpod-f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8.scope: Deactivated successfully. Oct 5 06:02:43 localhost podman[324016]: 2025-10-05 10:02:43.35437167 +0000 UTC m=+0.072993877 container died f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:02:43 localhost podman[324016]: 2025-10-05 10:02:43.383913046 +0000 UTC m=+0.102535223 container cleanup f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:02:43 localhost podman[324029]: 2025-10-05 10:02:43.445439508 +0000 UTC m=+0.093439594 container cleanup f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:02:43 localhost systemd[1]: libpod-conmon-f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8.scope: Deactivated successfully. 
Oct 5 06:02:43 localhost podman[324043]: 2025-10-05 10:02:43.484060894 +0000 UTC m=+0.080333176 container remove f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.489 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[40007611-1bb0-4cd5-a482-6a488cd201a2]: (4, ('Sun Oct 5 10:02:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480 (f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8)\nf607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8\nSun Oct 5 10:02:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9493e121-6caf-4009-9106-31c87685c480 (f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8)\nf607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.491 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0fac7f-1832-4513-85a8-3a03e7d4fd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.492 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9493e121-60, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:43 localhost kernel: device tap9493e121-60 left promiscuous mode Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.541 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb54f79-6077-49e1-aaf6-2ee21e6f5965]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 264 MiB data, 930 MiB used, 41 GiB / 42 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 226 op/s Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.560 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[080f6d37-2ce3-422a-a07a-bda2bfd0f9dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.561 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[534331ce-4a42-4b24-b25e-709298382b21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.577 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[10ae5534-fe97-4475-b322-ea6481c12024]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], 
['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 
'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1202759, 'reachable_time': 18890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 
'sequence_number': 255, 'pid': 324067, 'error': None, 'target': 'ovnmeta-9493e121-6caf-4009-9106-31c87685c480', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.579 163334 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9493e121-6caf-4009-9106-31c87685c480 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:02:43 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:43.579 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8aa98-612a-4831-8144-5c9f3ac22fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.615 2 DEBUG nova.compute.manager [req-4bb6bb71-a7f0-44df-9a16-fff9ab3b917a req-14db2f9c-3a6e-4014-a6e6-d73c1b7630f8 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.616 2 DEBUG oslo_concurrency.lockutils [req-4bb6bb71-a7f0-44df-9a16-fff9ab3b917a req-14db2f9c-3a6e-4014-a6e6-d73c1b7630f8 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.616 2 DEBUG oslo_concurrency.lockutils [req-4bb6bb71-a7f0-44df-9a16-fff9ab3b917a req-14db2f9c-3a6e-4014-a6e6-d73c1b7630f8 
89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.617 2 DEBUG oslo_concurrency.lockutils [req-4bb6bb71-a7f0-44df-9a16-fff9ab3b917a req-14db2f9c-3a6e-4014-a6e6-d73c1b7630f8 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.617 2 DEBUG nova.compute.manager [req-4bb6bb71-a7f0-44df-9a16-fff9ab3b917a req-14db2f9c-3a6e-4014-a6e6-d73c1b7630f8 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] No waiting events found dispatching network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Oct 5 06:02:43 localhost nova_compute[297130]: 2025-10-05 10:02:43.618 2 DEBUG nova.compute.manager [req-4bb6bb71-a7f0-44df-9a16-fff9ab3b917a req-14db2f9c-3a6e-4014-a6e6-d73c1b7630f8 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Received event network-vif-unplugged-1374da87-a9a5-4840-80a7-197494b76131 for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Oct 5 06:02:43 localhost systemd[1]: var-lib-containers-storage-overlay-9a670a74024f4ca1ba69bdb9356fc4c8b47f6b88b6007d6e1fab4614293754e3-merged.mount: Deactivated successfully. Oct 5 06:02:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f607573884f664df8d85feb5f123eb05e9ae351c9580b99a152528c13a2674a8-userdata-shm.mount: Deactivated successfully. Oct 5 06:02:43 localhost systemd[1]: run-netns-ovnmeta\x2d9493e121\x2d6caf\x2d4009\x2d9106\x2d31c87685c480.mount: Deactivated successfully. Oct 5 06:02:43 localhost systemd[1]: var-lib-containers-storage-overlay-b0ed2e7a0bf04b553ef494a49479d568c87b045656707d19d52fca310600b72b-merged.mount: Deactivated successfully. Oct 5 06:02:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ce37290097068605de51f950dd0f1e742f5be23fe0efa2b549f0c7c0fa03519-userdata-shm.mount: Deactivated successfully. Oct 5 06:02:43 localhost systemd[1]: run-netns-ovnmeta\x2d3b6dd988\x2dc148\x2d4dbf\x2dae5b\x2ddba073193ccc.mount: Deactivated successfully. 
Oct 5 06:02:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.024 2 DEBUG nova.network.neutron [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Activated binding for port 1374da87-a9a5-4840-80a7-197494b76131 and host np0005471150.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.025 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.026 2 DEBUG nova.virt.libvirt.vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-05T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2001023684',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2001023684',id=7,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-10-05T10:02:28Z,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b069d6351214d1baf4ff391a6512beb',ramdisk_id='',reservation_id='r-k8v41bv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model
='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1030348059',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1030348059-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-10-05T10:02:36Z,user_data=None,user_id='b56f1071781246a68c1693519a9cd054',uuid=b1dce7a2-b06b-4cdb-b072-ccd123742ded,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.027 2 DEBUG nova.network.os_vif_util [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Converting VIF {"id": 
"1374da87-a9a5-4840-80a7-197494b76131", "address": "fa:16:3e:4b:06:97", "network": {"id": "9493e121-6caf-4009-9106-31c87685c480", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-160158674-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b069d6351214d1baf4ff391a6512beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1374da87-a9", "ovs_interfaceid": "1374da87-a9a5-4840-80a7-197494b76131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.028 2 DEBUG nova.network.os_vif_util [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.029 2 DEBUG os_vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 
e7117de923d14d3491e796ec245562e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1374da87-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.043 2 INFO os_vif [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:06:97,bridge_name='br-int',has_traffic_filtering=True,id=1374da87-a9a5-4840-80a7-197494b76131,network=Network(9493e121-6caf-4009-9106-31c87685c480),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1374da87-a9')#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.043 
2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.044 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.044 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.045 2 DEBUG nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.045 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: 
b1dce7a2-b06b-4cdb-b072-ccd123742ded] Deleting instance files /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded_del#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.046 2 INFO nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Deletion of /var/lib/nova/instances/b1dce7a2-b06b-4cdb-b072-ccd123742ded_del complete#033[00m Oct 5 06:02:44 localhost nova_compute[297130]: 2025-10-05 10:02:44.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 264 MiB data, 930 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 152 op/s Oct 5 06:02:46 localhost openstack_network_exporter[250246]: ERROR 10:02:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:02:46 localhost openstack_network_exporter[250246]: ERROR 10:02:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:02:46 localhost openstack_network_exporter[250246]: ERROR 10:02:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:02:46 localhost openstack_network_exporter[250246]: ERROR 10:02:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:02:46 localhost openstack_network_exporter[250246]: Oct 5 06:02:46 localhost openstack_network_exporter[250246]: ERROR 10:02:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:02:46 localhost openstack_network_exporter[250246]: Oct 5 06:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:02:46 localhost podman[324068]: 2025-10-05 10:02:46.908613635 +0000 UTC m=+0.077981343 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true) Oct 5 06:02:46 localhost podman[324068]: 
2025-10-05 10:02:46.921125326 +0000 UTC m=+0.090493074 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:02:46 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:02:46 localhost podman[324069]: 2025-10-05 10:02:46.96844869 +0000 UTC m=+0.134272112 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:02:47 localhost podman[324069]: 2025-10-05 10:02:47.011699602 +0000 UTC m=+0.177523024 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Oct 5 06:02:47 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.052 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.053 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.053 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "b1dce7a2-b06b-4cdb-b072-ccd123742ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.070 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.071 2 DEBUG oslo_concurrency.lockutils [None 
req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.072 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.072 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.073 2 DEBUG oslo_concurrency.processutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 
0) Oct 5 06:02:47 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2467348954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.517 2 DEBUG oslo_concurrency.processutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.710 2 WARNING nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.712 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11644MB free_disk=41.639156341552734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.712 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.713 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.757 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Migration for instance b1dce7a2-b06b-4cdb-b072-ccd123742ded refers to another host's instance! 
_pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.779 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.807 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Migration afa45f89-4fc9-4778-b6b9-a613dd5960d9 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.808 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.808 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:02:47 localhost nova_compute[297130]: 2025-10-05 10:02:47.845 2 DEBUG oslo_concurrency.processutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:48 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/242525995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.293 2 DEBUG oslo_concurrency.processutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.302 2 DEBUG nova.compute.provider_tree [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.357 2 ERROR nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [req-517a2110-97bb-408f-a5e3-823a42d09948] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource provider with UUID 36221146-244b-49ab-8700-5471fa19d0c5. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-517a2110-97bb-408f-a5e3-823a42d09948"}]}#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.385 2 DEBUG nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.409 2 DEBUG nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 
'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.410 2 DEBUG nova.compute.provider_tree [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.452 2 DEBUG nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.483 2 DEBUG nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: 
COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:02:48 localhost nova_compute[297130]: 2025-10-05 10:02:48.540 2 DEBUG oslo_concurrency.processutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:02:48 localhost ceph-mon[316511]: 
mon.np0005471152@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:49 localhost systemd[1]: tmp-crun.1dYfBZ.mount: Deactivated successfully. Oct 5 06:02:49 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:02:49 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:49 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:49 localhost podman[324192]: 2025-10-05 10:02:49.074621532 +0000 UTC m=+0.068658749 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:02:49 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/4092279251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.118 2 DEBUG oslo_concurrency.processutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.125 2 DEBUG nova.compute.provider_tree [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.184 2 DEBUG nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updated inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 with generation 5 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 
'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.185 2 DEBUG nova.compute.provider_tree [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updating resource provider 36221146-244b-49ab-8700-5471fa19d0c5 generation from 5 to 6 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.185 2 DEBUG nova.compute.provider_tree [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.218 2 DEBUG nova.compute.resource_tracker [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.218 2 DEBUG oslo_concurrency.lockutils [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 
e7117de923d14d3491e796ec245562e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.224 2 INFO nova.compute.manager [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Migrating instance to np0005471150.localdomain finished successfully.#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.344 2 INFO nova.scheduler.client.report [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] Deleted allocation for migration afa45f89-4fc9-4778-b6b9-a613dd5960d9#033[00m Oct 5 06:02:49 localhost nova_compute[297130]: 2025-10-05 10:02:49.344 2 DEBUG nova.virt.libvirt.driver [None req-8958f746-4503-48be-b2a9-7764a3a89978 5d6dc4b83ba2400786360753fb6dcb65 e7117de923d14d3491e796ec245562e0 - - default default] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Oct 5 06:02:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s Oct 5 06:02:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s Oct 5 06:02:52 localhost nova_compute[297130]: 2025-10-05 10:02:52.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:52 localhost 
neutron_sriov_agent[264647]: 2025-10-05 10:02:52.743 2 INFO neutron.agent.securitygroups_rpc [None req-750911e5-7403-463f-90a2-497baee3fcf4 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']#033[00m Oct 5 06:02:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e93 e93: 6 total, 6 up, 6 in Oct 5 06:02:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 532 KiB/s rd, 2.6 MiB/s wr, 128 op/s Oct 5 06:02:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:54 localhost nova_compute[297130]: 2025-10-05 10:02:54.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:54 localhost neutron_sriov_agent[264647]: 2025-10-05 10:02:54.066 2 INFO neutron.agent.securitygroups_rpc [None req-1d6ffaf6-6781-4c7e-bba6-1906ee2d0509 b56f1071781246a68c1693519a9cd054 1b069d6351214d1baf4ff391a6512beb - - default default] Security group member updated ['a4a2342d-6cdc-4d3d-bd2e-5538a6a6c149']#033[00m Oct 5 06:02:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v121: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 532 KiB/s rd, 2.6 MiB/s wr, 128 op/s Oct 5 06:02:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e94 e94: 6 total, 6 up, 6 in Oct 5 06:02:56 localhost podman[248157]: time="2025-10-05T10:02:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:02:56 localhost podman[248157]: @ - - [05/Oct/2025:10:02:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148141 "" 
"Go-http-client/1.1" Oct 5 06:02:56 localhost podman[248157]: @ - - [05/Oct/2025:10:02:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19806 "" "Go-http-client/1.1" Oct 5 06:02:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e95 e95: 6 total, 6 up, 6 in Oct 5 06:02:57 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:02:57 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:02:57 localhost podman[324231]: 2025-10-05 10:02:57.113699808 +0000 UTC m=+0.062913501 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:02:57 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:02:57 localhost nova_compute[297130]: 2025-10-05 10:02:57.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:57 localhost nova_compute[297130]: 2025-10-05 10:02:57.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 8.6 MiB/s rd, 12 MiB/s wr, 362 op/s Oct 5 06:02:57 localhost nova_compute[297130]: 
2025-10-05 10:02:57.868 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:02:57 localhost nova_compute[297130]: 2025-10-05 10:02:57.869 2 INFO nova.compute.manager [-] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] VM Stopped (Lifecycle Event)#033[00m Oct 5 06:02:57 localhost nova_compute[297130]: 2025-10-05 10:02:57.890 2 DEBUG nova.compute.manager [None req-c4b60141-f6e5-4ed3-834c-3ac799b5b481 - - - - - -] [instance: b1dce7a2-b06b-4cdb-b072-ccd123742ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:02:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:02:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:02:58 localhost podman[324254]: 2025-10-05 10:02:58.911012877 +0000 UTC m=+0.078581269 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible) Oct 5 06:02:58 localhost podman[324254]: 2025-10-05 10:02:58.925248956 +0000 UTC m=+0.092817388 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:02:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:02:58 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:02:59 localhost podman[324253]: 2025-10-05 10:02:59.018180386 +0000 UTC m=+0.186913979 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Oct 5 06:02:59 localhost nova_compute[297130]: 2025-10-05 10:02:59.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:59 localhost podman[324253]: 2025-10-05 10:02:59.058240532 +0000 UTC m=+0.226974155 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm) Oct 5 06:02:59 localhost systemd[1]: 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:02:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 172 op/s Oct 5 06:02:59 localhost dnsmasq[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/addn_hosts - 0 addresses Oct 5 06:02:59 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/host Oct 5 06:02:59 localhost dnsmasq-dhcp[322310]: read /var/lib/neutron/dhcp/3f4911a6-9be1-4156-824f-838e2bac1e4b/opts Oct 5 06:02:59 localhost podman[324308]: 2025-10-05 10:02:59.617661583 +0000 UTC m=+0.074514567 container kill 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:02:59 localhost ovn_controller[157556]: 2025-10-05T10:02:59Z|00074|binding|INFO|Releasing lport abd8ebc9-bda1-4083-a93e-d6e6832b93e2 from this chassis (sb_readonly=0) Oct 5 06:02:59 localhost nova_compute[297130]: 2025-10-05 10:02:59.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:02:59 localhost ovn_controller[157556]: 2025-10-05T10:02:59Z|00075|binding|INFO|Setting lport abd8ebc9-bda1-4083-a93e-d6e6832b93e2 down in Southbound Oct 5 06:02:59 localhost kernel: device tapabd8ebc9-bd left promiscuous mode Oct 5 06:02:59 localhost ovn_metadata_agent[163196]: 
2025-10-05 10:02:59.804 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-3f4911a6-9be1-4156-824f-838e2bac1e4b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f4911a6-9be1-4156-824f-838e2bac1e4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7117de923d14d3491e796ec245562e0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e574b21f-048d-453d-bca1-af20eedc296c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=abd8ebc9-bda1-4083-a93e-d6e6832b93e2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:02:59 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:59.806 163201 INFO neutron.agent.ovn.metadata.agent [-] Port abd8ebc9-bda1-4083-a93e-d6e6832b93e2 in datapath 3f4911a6-9be1-4156-824f-838e2bac1e4b unbound from our chassis#033[00m Oct 5 06:02:59 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:59.808 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f4911a6-9be1-4156-824f-838e2bac1e4b, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:02:59 localhost ovn_metadata_agent[163196]: 2025-10-05 10:02:59.810 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[d02543e8-4228-4228-a73f-a7ca16ce901f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:02:59 localhost nova_compute[297130]: 2025-10-05 10:02:59.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:00 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:00.019 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:02:59Z, description=, device_id=99c527c1-4fe2-49c1-8a4e-62e5a076f526, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eabf9fee-3fda-416a-aa62-de6b2cc2ba5b, ip_allocation=immediate, mac_address=fa:16:3e:9a:33:25, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=682, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:02:59Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:03:00 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:03:00 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:03:00 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:03:00 localhost podman[324347]: 2025-10-05 10:03:00.245935148 +0000 UTC m=+0.066495209 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:03:00 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:00.502 271653 INFO neutron.agent.dhcp.agent [None req-8b634740-71ed-4aeb-8228-154ad513241d - - - - - -] DHCP configuration for ports {'eabf9fee-3fda-416a-aa62-de6b2cc2ba5b'} is completed#033[00m Oct 5 06:03:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:03:00 localhost systemd[1]: tmp-crun.rT50fr.mount: Deactivated successfully. 
Oct 5 06:03:00 localhost podman[324369]: 2025-10-05 10:03:00.926698336 +0000 UTC m=+0.087436962 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal) Oct 5 06:03:00 localhost podman[324369]: 2025-10-05 10:03:00.965033703 +0000 UTC m=+0.125772329 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=) Oct 5 06:03:00 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:03:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:01.348 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:03:01Z, description=, device_id=629de7b7-b3b3-4aac-a8ff-1872240a20af, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c292a1c3-cae8-4386-9c22-83051491879a, ip_allocation=immediate, mac_address=fa:16:3e:a1:c0:8a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, 
provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=684, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:03:01Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:03:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e96 e96: 6 total, 6 up, 6 in Oct 5 06:03:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 173 op/s Oct 5 06:03:01 localhost nova_compute[297130]: 2025-10-05 10:03:01.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:01 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:03:01 localhost podman[324407]: 2025-10-05 10:03:01.676655996 +0000 UTC m=+0.065775589 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:03:01 
localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:03:01 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:03:02 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:02.061 271653 INFO neutron.agent.dhcp.agent [None req-1cfc47b8-60c9-4b20-b718-7bb3a74894ec - - - - - -] DHCP configuration for ports {'c292a1c3-cae8-4386-9c22-83051491879a'} is completed#033[00m Oct 5 06:03:02 localhost nova_compute[297130]: 2025-10-05 10:03:02.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:02 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:03:02 localhost podman[324444]: 2025-10-05 10:03:02.750238123 +0000 UTC m=+0.063382984 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:03:02 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:03:02 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:03:02 localhost nova_compute[297130]: 2025-10-05 10:03:02.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:03 localhost nova_compute[297130]: 2025-10-05 10:03:03.049 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:03 localhost systemd[1]: tmp-crun.fBojKg.mount: Deactivated successfully. Oct 5 06:03:03 localhost dnsmasq[322310]: exiting on receipt of SIGTERM Oct 5 06:03:03 localhost podman[324480]: 2025-10-05 10:03:03.370832806 +0000 UTC m=+0.086949538 container kill 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:03:03 localhost systemd[1]: libpod-52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883.scope: Deactivated successfully. 
Oct 5 06:03:03 localhost podman[324493]: 2025-10-05 10:03:03.45296512 +0000 UTC m=+0.062601381 container died 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:03:03 localhost podman[324493]: 2025-10-05 10:03:03.483325501 +0000 UTC m=+0.092961752 container cleanup 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:03:03 localhost systemd[1]: libpod-conmon-52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883.scope: Deactivated successfully. 
Oct 5 06:03:03 localhost podman[324494]: 2025-10-05 10:03:03.540129364 +0000 UTC m=+0.147006000 container remove 52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f4911a6-9be1-4156-824f-838e2bac1e4b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:03:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 224 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 197 op/s Oct 5 06:03:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:03.568 271653 INFO neutron.agent.dhcp.agent [None req-200d843b-c427-45bf-b8d5-13839c02ddae - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:03:03 localhost systemd[1]: var-lib-containers-storage-overlay-e7aca00c32c75cabb9276c0698ac2491b6a89b1d59c1367bf888981e5707f38f-merged.mount: Deactivated successfully. Oct 5 06:03:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52988a7dd8ebe86a7dfc9200dcfffc0b2c0359341e8e8bc08e0509cb4aa7a883-userdata-shm.mount: Deactivated successfully. Oct 5 06:03:03 localhost systemd[1]: run-netns-qdhcp\x2d3f4911a6\x2d9be1\x2d4156\x2d824f\x2d838e2bac1e4b.mount: Deactivated successfully. 
Oct 5 06:03:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:04 localhost nova_compute[297130]: 2025-10-05 10:03:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:04.232 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:03:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 224 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 5.3 MiB/s rd, 5.2 MiB/s wr, 176 op/s Oct 5 06:03:06 localhost nova_compute[297130]: 2025-10-05 10:03:06.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:03:06 localhost podman[324520]: 2025-10-05 10:03:06.912155108 +0000 UTC m=+0.076448691 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:03:06 localhost podman[324520]: 2025-10-05 10:03:06.946167697 +0000 UTC 
m=+0.110461280 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:03:06 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:03:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e97 e97: 6 total, 6 up, 6 in Oct 5 06:03:07 localhost nova_compute[297130]: 2025-10-05 10:03:07.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 7 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 159 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 6.8 MiB/s rd, 5.8 MiB/s wr, 213 op/s Oct 5 06:03:07 localhost nova_compute[297130]: 2025-10-05 10:03:07.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.449 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.450 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.467 2 DEBUG nova.compute.manager [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: 
a601fbb5-19b2-4885-9e5e-3dc6daea4182] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.554 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.554 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.559 2 DEBUG nova.virt.hardware [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.559 2 INFO nova.compute.claims [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Claim successful on node np0005471152.localdomain#033[00m Oct 5 06:03:08 localhost nova_compute[297130]: 2025-10-05 10:03:08.730 2 DEBUG oslo_concurrency.processutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.002 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:09 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/192773651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.173 2 DEBUG oslo_concurrency.processutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.178 2 DEBUG nova.compute.provider_tree [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.194 2 DEBUG nova.scheduler.client.report [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.229 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.230 2 DEBUG nova.compute.manager [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.279 2 DEBUG nova.compute.claims [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Aborting claim: abort /usr/lib/python3.9/site-packages/nova/compute/claims.py:85#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.281 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.282 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.394 2 DEBUG oslo_concurrency.processutils [None 
req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 7 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 159 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 6.7 MiB/s rd, 5.8 MiB/s wr, 210 op/s Oct 5 06:03:09 localhost neutron_sriov_agent[264647]: 2025-10-05 10:03:09.574 2 INFO neutron.agent.securitygroups_rpc [None req-658ae260-e241-4207-aff8-7d411df32887 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group rule updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:09 localhost neutron_sriov_agent[264647]: 2025-10-05 10:03:09.762 2 INFO neutron.agent.securitygroups_rpc [None req-2959de52-f7c2-4f59-a482-69a1686405fe d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group rule updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:09 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2748692096' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.818 2 DEBUG oslo_concurrency.processutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.824 2 DEBUG nova.compute.provider_tree [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.877 2 DEBUG nova.scheduler.client.report [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.909 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.abort_instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.911 2 DEBUG nova.compute.utils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Conflict updating instance a601fbb5-19b2-4885-9e5e-3dc6daea4182. Expected: {'task_state': [None]}. Actual: {'task_state': 'deleting'} notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.912 2 DEBUG nova.compute.manager [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Instance disappeared during build. 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2483#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.912 2 DEBUG nova.compute.manager [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.913 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "refresh_cache-a601fbb5-19b2-4885-9e5e-3dc6daea4182" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.913 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquired lock "refresh_cache-a601fbb5-19b2-4885-9e5e-3dc6daea4182" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:09 localhost nova_compute[297130]: 2025-10-05 10:03:09.914 2 DEBUG nova.network.neutron [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.319 2 DEBUG nova.network.neutron [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Instance cache 
missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.554 2 DEBUG nova.network.neutron [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.573 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Releasing lock "refresh_cache-a601fbb5-19b2-4885-9e5e-3dc6daea4182" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.573 2 DEBUG nova.compute.manager [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.574 2 DEBUG nova.compute.manager [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Skipping network deallocation for instance since networking was not requested. 
_deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2255#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.669 2 INFO nova.scheduler.client.report [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Deleted allocations for instance a601fbb5-19b2-4885-9e5e-3dc6daea4182#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.670 2 DEBUG oslo_concurrency.lockutils [None req-9a36957a-3578-4cfb-a281-de4972414c86 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 2.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.671 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 1.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.671 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.672 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 
65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.672 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.673 2 INFO nova.compute.manager [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Terminating instance#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.675 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquiring lock "refresh_cache-a601fbb5-19b2-4885-9e5e-3dc6daea4182" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.675 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Acquired lock "refresh_cache-a601fbb5-19b2-4885-9e5e-3dc6daea4182" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.676 2 DEBUG nova.network.neutron [None 
req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.717 2 DEBUG nova.network.neutron [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.852 2 DEBUG nova.network.neutron [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.867 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Releasing lock "refresh_cache-a601fbb5-19b2-4885-9e5e-3dc6daea4182" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.868 2 DEBUG nova.compute.manager [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.873 2 DEBUG nova.virt.libvirt.driver [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.873 2 INFO nova.virt.libvirt.driver [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Instance destroyed successfully.#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.874 2 DEBUG nova.objects.instance [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lazy-loading 'resources' on Instance uuid a601fbb5-19b2-4885-9e5e-3dc6daea4182 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.922 2 INFO nova.virt.libvirt.driver [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Deletion of /var/lib/nova/instances/a601fbb5-19b2-4885-9e5e-3dc6daea4182_del complete#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.995 2 INFO nova.compute.manager [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Took 0.13 seconds to destroy the instance on the hypervisor.#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.995 2 DEBUG oslo.service.loopingcall [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Waiting for function 
nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.996 2 DEBUG nova.compute.manager [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Oct 5 06:03:10 localhost nova_compute[297130]: 2025-10-05 10:03:10.996 2 DEBUG nova.network.neutron [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Oct 5 06:03:11 localhost nova_compute[297130]: 2025-10-05 10:03:11.025 2 DEBUG nova.network.neutron [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:11 localhost nova_compute[297130]: 2025-10-05 10:03:11.038 2 DEBUG nova.network.neutron [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:11 localhost nova_compute[297130]: 2025-10-05 10:03:11.055 2 INFO nova.compute.manager [-] [instance: a601fbb5-19b2-4885-9e5e-3dc6daea4182] Took 0.06 seconds to deallocate network for instance.#033[00m Oct 5 06:03:11 localhost nova_compute[297130]: 2025-10-05 10:03:11.251 2 DEBUG oslo_concurrency.lockutils [None req-766e5a39-3ed1-41b0-b739-d9a77909e579 65235d980b7041aaaef129667a5c4885 f6c89969b44f4e0daf119e7d3233303f - - default default] Lock "a601fbb5-19b2-4885-9e5e-3dc6daea4182" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 0.580s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:03:11 Oct 5 06:03:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:03:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:03:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'manila_data', '.mgr', 'manila_metadata', 'images'] Oct 5 06:03:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:03:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 7 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 159 active+clean; 304 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 170 op/s Oct 5 06:03:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:03:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:03:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:03:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006584572166387493 of space, bias 1.0, pg target 1.3169144332774985 quantized to 32 (current 32) Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007547503140703518 of space, bias 1.0, pg target 1.501953125 quantized to 32 (current 32) Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:03:11 localhost ceph-mgr[301363]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:03:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019400387353433835 quantized to 16 (current 16) Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:03:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:03:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:03:12 localhost nova_compute[297130]: 2025-10-05 10:03:12.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.083 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock 
"93bc594f-2d55-4daf-8d7f-ff1682a13ddf" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.084 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.137 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.256 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.257 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.262 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.263 2 INFO nova.compute.claims [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Claim successful on node np0005471152.localdomain#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.393 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.502 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.503 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.503 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.504 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.525 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.526 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.526 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.527 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.527 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 186 op/s Oct 5 06:03:13 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:03:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:03:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:13 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/369145161' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.908 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.914 2 DEBUG nova.compute.provider_tree [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:13 localhost systemd[1]: tmp-crun.W49QIj.mount: Deactivated successfully. 
Oct 5 06:03:13 localhost podman[324619]: 2025-10-05 10:03:13.926687269 +0000 UTC m=+0.099779627 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Oct 5 06:03:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 
348127232 kv_alloc: 322961408 Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.952 2 DEBUG nova.scheduler.client.report [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:13 localhost podman[324619]: 2025-10-05 10:03:13.960542755 +0000 UTC m=+0.133635133 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:03:13 localhost podman[324620]: 2025-10-05 10:03:13.968480832 +0000 UTC m=+0.136926054 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:03:13 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:03:13 localhost podman[324620]: 2025-10-05 10:03:13.981314193 +0000 UTC m=+0.149759435 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:03:13 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: 
Deactivated successfully. Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.988 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:13 localhost nova_compute[297130]: 2025-10-05 10:03:13.989 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.050 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Allocating IP information in the background. 
_allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.050 2 DEBUG nova.network.neutron [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.068 2 INFO nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.088 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.138 2 DEBUG nova.policy [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd653613d543e463ab1cad06b2f955cc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d385dfb4a744527807f14f2c315ebb6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.199 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Start spawning the instance on the hypervisor. 
_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.201 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.202 2 INFO nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Creating image(s)#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.231 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.262 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.291 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:14 localhost 
nova_compute[297130]: 2025-10-05 10:03:14.295 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.313 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.368 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.370 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "b315ad9cd7995c7800ecf94222a7c08b7e34bf34" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.372 2 DEBUG oslo_concurrency.lockutils [None 
req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "b315ad9cd7995c7800ecf94222a7c08b7e34bf34" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.372 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "b315ad9cd7995c7800ecf94222a7c08b7e34bf34" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.404 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:14 localhost nova_compute[297130]: 2025-10-05 10:03:14.410 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.001 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "rbd import 
--pool vms /var/lib/nova/instances/_base/b315ad9cd7995c7800ecf94222a7c08b7e34bf34 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:15 localhost neutron_sriov_agent[264647]: 2025-10-05 10:03:15.071 2 INFO neutron.agent.securitygroups_rpc [req-007b9dd9-858a-4fdf-818c-a5235e42ef11 req-32195d07-d5fb-43db-8bc6-dbd8908366b6 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group member updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.092 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] resizing rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.242 2 DEBUG nova.objects.instance [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lazy-loading 'migration_context' on Instance uuid 93bc594f-2d55-4daf-8d7f-ff1682a13ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.364 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.364 2 DEBUG nova.virt.libvirt.driver [None 
req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Ensure instance console log exists: /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.365 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.366 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.366 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:15 localhost nova_compute[297130]: 2025-10-05 10:03:15.490 2 DEBUG nova.network.neutron [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Successfully created port: 
d48131a7-f387-4e0a-975d-d2f8cc362d7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Oct 5 06:03:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 186 op/s Oct 5 06:03:15 localhost systemd[1]: tmp-crun.aE6ZFV.mount: Deactivated successfully. Oct 5 06:03:15 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:03:15 localhost podman[324846]: 2025-10-05 10:03:15.988320354 +0000 UTC m=+0.058278544 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:03:15 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:03:15 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.235 2 DEBUG nova.network.neutron [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Successfully updated port: d48131a7-f387-4e0a-975d-d2f8cc362d7e 
_update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.256 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.256 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquired lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.256 2 DEBUG nova.network.neutron [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.275 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.289 2 DEBUG 
nova.compute.manager [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Received event network-changed-d48131a7-f387-4e0a-975d-d2f8cc362d7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.290 2 DEBUG nova.compute.manager [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Refreshing instance network info cache due to event network-changed-d48131a7-f387-4e0a-975d-d2f8cc362d7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.290 2 DEBUG oslo_concurrency.lockutils [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.341 2 DEBUG nova.network.neutron [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Oct 5 06:03:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e98 e98: 6 total, 6 up, 6 in Oct 5 06:03:16 localhost openstack_network_exporter[250246]: ERROR 10:03:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:03:16 localhost openstack_network_exporter[250246]: ERROR 10:03:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:03:16 localhost openstack_network_exporter[250246]: ERROR 10:03:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:03:16 localhost openstack_network_exporter[250246]: ERROR 10:03:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:03:16 localhost openstack_network_exporter[250246]: Oct 5 06:03:16 localhost openstack_network_exporter[250246]: ERROR 10:03:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:03:16 localhost openstack_network_exporter[250246]: Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.739 2 DEBUG nova.network.neutron [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updating instance_info_cache with network_info: [{"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.764 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Releasing lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.765 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Instance network_info: |[{"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": 
"8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.766 2 DEBUG oslo_concurrency.lockutils [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquired lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.766 2 DEBUG nova.network.neutron [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Refreshing network info cache for port d48131a7-f387-4e0a-975d-d2f8cc362d7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.771 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Start _get_guest_xml network_info=[{"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": 
"tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-05T10:00:39Z,direct_url=,disk_format='qcow2',id=6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b36437b65444bcdac75beef77b6981e',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-05T10:00:40Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da'}], 'ephemerals': 
[], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.780 2 WARNING nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.783 2 DEBUG nova.virt.libvirt.host [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Searching host: 'np0005471152.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.783 2 DEBUG nova.virt.libvirt.host [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.785 2 DEBUG nova.virt.libvirt.host [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Searching host: 'np0005471152.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.786 2 DEBUG nova.virt.libvirt.host [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.786 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.787 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-05T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='97ddc44b-feec-4b28-874c-024e6ebcea56',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-05T10:00:39Z,direct_url=,disk_format='qcow2',id=6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='8b36437b65444bcdac75beef77b6981e',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-10-05T10:00:40Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.787 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.788 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.788 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.788 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.789 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.789 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Oct 5 06:03:16 
localhost nova_compute[297130]: 2025-10-05 10:03:16.789 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.790 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.790 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.790 2 DEBUG nova.virt.hardware [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Oct 5 06:03:16 localhost nova_compute[297130]: 2025-10-05 10:03:16.794 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 
handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:03:17 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/750140938' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.222 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.254 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.259 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.280 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.280 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] 
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.281 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.314 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.315 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 
2025-10-05 10:03:17.315 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 123 op/s Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.675 2 DEBUG nova.network.neutron [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updated VIF entry in instance network info cache for port d48131a7-f387-4e0a-975d-d2f8cc362d7e. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.675 2 DEBUG nova.network.neutron [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updating instance_info_cache with network_info: [{"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.694 2 DEBUG oslo_concurrency.lockutils [req-4f8a758c-f71f-4f00-894b-53aebb7658c9 req-25277af7-8daa-4d08-8072-0cd9276cbf8f 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Releasing lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" 
lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Oct 5 06:03:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:03:17 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/769117360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:03:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:17 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2918118936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.715 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.716 2 DEBUG nova.virt.libvirt.vif [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T10:03:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPetdIzX/DmYbuho/tXYoT+1fIe7+15KwHkqrQ2Jxm3DnDcJjrE6cq7QOdR7SpvKf/EdYyjCR4NQsyAcA0uFCUjYiFoXcP0oy/CffHrzk3+7jJw6fwvaC/fOGojbc79jbA==',key_name='tempest-keypair-1153912512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d385dfb4a744527807f14f2c315ebb6',ramdisk_id='',reservation_id='r-mjkjk40f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-339982464',owner_user_name='tempest-ServersV294TestFqdnHostnames-339982464-project-member'},tags=TagList,task_state='spawning',terminated_at=None,tru
sted_certs=None,updated_at=2025-10-05T10:03:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d653613d543e463ab1cad06b2f955cc8',uuid=93bc594f-2d55-4daf-8d7f-ff1682a13ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.717 2 DEBUG nova.network.os_vif_util [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Converting VIF {"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": 
"tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.718 2 DEBUG nova.network.os_vif_util [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.720 2 DEBUG nova.objects.instance [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93bc594f-2d55-4daf-8d7f-ff1682a13ddf obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.725 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.750 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] End _get_guest_xml xml=
[guest domain XML elided: the XML markup was stripped during log capture, leaving only element values across the syslog-prefixed lines; recoverable fields: uuid 93bc594f-2d55-4daf-8d7f-ff1682a13ddf, name instance-0000000a, memory 131072, 1 vCPU, nova display name guest-instance-1, creationTime 2025-10-05 10:03:16, flavor memory 128 / 1 vcpu / 0 ephemeral / 0 swap / root 1, owner user tempest-ServersV294TestFqdnHostnames-339982464-project-member, owner project tempest-ServersV294TestFqdnHostnames-339982464, sysinfo manufacturer RDO, product OpenStack Compute, version 27.5.2-0.20250829104910.6f8decf.el9, serial/uuid 93bc594f-2d55-4daf-8d7f-ff1682a13ddf, family Virtual Machine, os type hvm, rng backend /dev/urandom]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.751 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Preparing to wait for external event network-vif-plugged-d48131a7-f387-4e0a-975d-d2f8cc362d7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.751 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default]
Acquiring lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.751 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.752 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.753 2 DEBUG nova.virt.libvirt.vif [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-05T10:03:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPetdIzX/DmYbuho/tXYoT+1fIe7+15KwHkqrQ2Jxm3DnDcJjrE6cq7QOdR7SpvKf/EdYyjCR4NQsyAcA0uFCUjYiFoXcP0oy/CffHrzk3+7jJw6fwvaC/fOGojbc79jbA==',key_name='tempest-keypair-1153912512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d385dfb4a744527807f14f2c315ebb6',ramdisk_id='',reservation_id='r-mjkjk40f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-339982464',owner_user_name='tempest-ServersV294TestFqdnHostnames-339982464-project-member'},tags=TagList,task_state='spawning',terminate
d_at=None,trusted_certs=None,updated_at=2025-10-05T10:03:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d653613d543e463ab1cad06b2f955cc8',uuid=93bc594f-2d55-4daf-8d7f-ff1682a13ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.753 2 DEBUG nova.network.os_vif_util [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Converting VIF {"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", 
"subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.754 2 DEBUG nova.network.os_vif_util [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.755 2 DEBUG os_vif [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Plugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd48131a7-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd48131a7-f3, col_values=(('external_ids', {'iface-id': 'd48131a7-f387-4e0a-975d-d2f8cc362d7e', 'iface-status': 
'active', 'attached-mac': 'fa:16:3e:a9:bd:33', 'vm-uuid': '93bc594f-2d55-4daf-8d7f-ff1682a13ddf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.771 2 INFO os_vif [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3')#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.819 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.819 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.819 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] No VIF found with MAC fa:16:3e:a9:bd:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.820 2 INFO nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Using config drive#033[00m Oct 5 06:03:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:03:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.844 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:17 localhost systemd[1]: tmp-crun.9Uc3k8.mount: Deactivated successfully. 
Oct 5 06:03:17 localhost podman[324955]: 2025-10-05 10:03:17.929481585 +0000 UTC m=+0.092284884 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:03:17 localhost podman[324953]: 2025-10-05 10:03:17.89930759 +0000 UTC m=+0.066694574 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:03:17 localhost podman[324955]: 2025-10-05 10:03:17.961104929 +0000 UTC m=+0.123908228 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:03:17 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.980 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.981 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11617MB free_disk=41.637908935546875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.981 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:17 localhost nova_compute[297130]: 2025-10-05 10:03:17.981 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:17 localhost podman[324953]: 2025-10-05 10:03:17.983220984 +0000 UTC m=+0.150607958 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2) Oct 5 06:03:17 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.015 2 INFO nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Creating config drive at /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/disk.config#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.019 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylo_eqrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.062 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Instance 93bc594f-2d55-4daf-8d7f-ff1682a13ddf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.062 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.063 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.101 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.136 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylo_eqrp" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.162 2 DEBUG nova.storage.rbd_utils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 
8d385dfb4a744527807f14f2c315ebb6 - - default default] rbd image 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.166 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/disk.config 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.363 2 DEBUG oslo_concurrency.processutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/disk.config 93bc594f-2d55-4daf-8d7f-ff1682a13ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.365 2 INFO nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Deleting local config drive /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf/disk.config because it was imported into RBD.#033[00m Oct 5 06:03:18 localhost kernel: device tapd48131a7-f3 entered promiscuous mode Oct 5 06:03:18 localhost ovn_controller[157556]: 2025-10-05T10:03:18Z|00076|binding|INFO|Claiming lport d48131a7-f387-4e0a-975d-d2f8cc362d7e for this chassis. 
Oct 5 06:03:18 localhost ovn_controller[157556]: 2025-10-05T10:03:18Z|00077|binding|INFO|d48131a7-f387-4e0a-975d-d2f8cc362d7e: Claiming fa:16:3e:a9:bd:33 10.100.0.13 Oct 5 06:03:18 localhost NetworkManager[5970]: [1759658598.4144] manager: (tapd48131a7-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/20) Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:18 localhost systemd-udevd[325084]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:03:18 localhost ovn_controller[157556]: 2025-10-05T10:03:18Z|00078|binding|INFO|Setting lport d48131a7-f387-4e0a-975d-d2f8cc362d7e ovn-installed in OVS Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:18 localhost ovn_controller[157556]: 2025-10-05T10:03:18Z|00079|binding|INFO|Setting lport d48131a7-f387-4e0a-975d-d2f8cc362d7e up in Southbound Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.434 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:bd:33 10.100.0.13'], port_security=['fa:16:3e:a9:bd:33 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 
'93bc594f-2d55-4daf-8d7f-ff1682a13ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d385dfb4a744527807f14f2c315ebb6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '18162d23-56f3-4a7e-93c2-8a3429bcf8f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54c0875b-2655-475c-808c-45277084df2c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=d48131a7-f387-4e0a-975d-d2f8cc362d7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.437 163201 INFO neutron.agent.ovn.metadata.agent [-] Port d48131a7-f387-4e0a-975d-d2f8cc362d7e in datapath 7fbae8f2-abd5-4dc6-a4c4-731281ea7308 bound to our chassis#033[00m Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.440 163201 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fbae8f2-abd5-4dc6-a4c4-731281ea7308#033[00m Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:18 localhost NetworkManager[5970]: [1759658598.4477] device (tapd48131a7-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Oct 5 06:03:18 localhost NetworkManager[5970]: [1759658598.4494] device (tapd48131a7-f3): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.455 271895 DEBUG oslo.privsep.daemon [-] 
privsep: reply[a05f933a-0dc1-40a6-8c0a-97a5c10b9f27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.457 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fbae8f2-a1 in ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Oct 5 06:03:18 localhost systemd-machined[206743]: New machine qemu-2-instance-0000000a. Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.460 271895 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fbae8f2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.463 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[e050f75e-bf15-46b0-873e-f237ea9f5ab5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.464 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[68050b09-dba0-40da-9b30-b3c9bc673843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:18 localhost systemd[1]: Started Virtual Machine qemu-2-instance-0000000a. 
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.475 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[b48cb472-6fd8-408b-8796-647a0c8045ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.497 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3af5e689-9d78-404d-b072-a1f345d8b8d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 5 06:03:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2559165332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.527 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd3d55d-36e1-4441-9679-724764f3ad14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.533 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[1e442bb2-f18e-409b-830e-6d16e268b54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost NetworkManager[5970]: [1759658598.5369] manager: (tap7fbae8f2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.540 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.546 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.562 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.570 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[0c80a4c5-f0b3-4fd0-8dae-fc02c68b3ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.573 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[7168fe37-f6e5-40e9-bf88-8406084a1aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.587 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.587 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.593 2 DEBUG nova.compute.manager [req-e52b4e84-7d61-47b8-bd6a-7ad4501374a5 req-ca42f895-4250-4f3d-8fed-0ec34a160afc 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Received event network-vif-plugged-d48131a7-f387-4e0a-975d-d2f8cc362d7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.594 2 DEBUG oslo_concurrency.lockutils [req-e52b4e84-7d61-47b8-bd6a-7ad4501374a5 req-ca42f895-4250-4f3d-8fed-0ec34a160afc 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.594 2 DEBUG oslo_concurrency.lockutils [req-e52b4e84-7d61-47b8-bd6a-7ad4501374a5 req-ca42f895-4250-4f3d-8fed-0ec34a160afc 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.595 2 DEBUG oslo_concurrency.lockutils [req-e52b4e84-7d61-47b8-bd6a-7ad4501374a5 req-ca42f895-4250-4f3d-8fed-0ec34a160afc 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.595 2 DEBUG nova.compute.manager [req-e52b4e84-7d61-47b8-bd6a-7ad4501374a5 req-ca42f895-4250-4f3d-8fed-0ec34a160afc 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Processing event network-vif-plugged-d48131a7-f387-4e0a-975d-d2f8cc362d7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 5 06:03:18 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap7fbae8f2-a1: link becomes ready
Oct 5 06:03:18 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap7fbae8f2-a0: link becomes ready
Oct 5 06:03:18 localhost NetworkManager[5970]: [1759658598.5965] device (tap7fbae8f2-a0): carrier: link connected
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.599 323411 DEBUG oslo.privsep.daemon [-] privsep: reply[c4aab45f-f4aa-452f-aa17-1d59f8631df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.618 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[a52692e7-a34b-4fe9-8392-bfc74b45e476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fbae8f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:99:b0:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1207466, 'reachable_time': 23179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325136, 'error': None, 'target': 'ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.631 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b5c24e-bf31-4c0a-8b0d-ac111dc62402]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:b025'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1207466, 'tstamp': 1207466}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325139, 'error': None, 'target': 'ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.645 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3fca96b1-9007-4d85-aad8-b79915a4dfdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fbae8f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:99:b0:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1207466, 'reachable_time': 23179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325141, 'error': None, 'target': 'ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.671 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[531ddd87-bc50-4ba2-970d-5b1bffd46109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.717 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[20bff62a-5b6f-430e-8a92-48ca64c450ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.719 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fbae8f2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.720 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.721 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fbae8f2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:18 localhost kernel: device tap7fbae8f2-a0 entered promiscuous mode
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.727 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fbae8f2-a0, col_values=(('external_ids', {'iface-id': '84694d96-5d66-48cd-82c8-b9dadf94c922'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:18 localhost ovn_controller[157556]: 2025-10-05T10:03:18Z|00080|binding|INFO|Releasing lport 84694d96-5d66-48cd-82c8-b9dadf94c922 from this chassis (sb_readonly=0)
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.741 163201 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fbae8f2-abd5-4dc6-a4c4-731281ea7308.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fbae8f2-abd5-4dc6-a4c4-731281ea7308.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 5 06:03:18 localhost nova_compute[297130]: 2025-10-05 10:03:18.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.743 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0afd26-4473-4e0c-b458-f50ab4c9993d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.744 163201 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg =
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: global
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: log /dev/log local0 debug
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: log-tag haproxy-metadata-proxy-7fbae8f2-abd5-4dc6-a4c4-731281ea7308
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: user root
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: group root
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: maxconn 1024
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: pidfile /var/lib/neutron/external/pids/7fbae8f2-abd5-4dc6-a4c4-731281ea7308.pid.haproxy
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: daemon
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]:
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: defaults
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: log global
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: mode http
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: option httplog
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: option dontlognull
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: option http-server-close
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: option forwardfor
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: retries 3
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: timeout http-request 30s
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: timeout connect 30s
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: timeout client 32s
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: timeout server 32s
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: timeout http-keep-alive 30s
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]:
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]:
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: listen listener
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: bind 169.254.169.254:80
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: server metadata /var/lib/neutron/metadata_proxy
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: http-request add-header X-OVN-Network-ID 7fbae8f2-abd5-4dc6-a4c4-731281ea7308
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 5 06:03:18 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:18.745 163201 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'env', 'PROCESS_TAG=haproxy-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fbae8f2-abd5-4dc6-a4c4-731281ea7308.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 5 06:03:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 5 06:03:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/99346386' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 5 06:03:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 5 06:03:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/99346386' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 5 06:03:19 localhost podman[325200]:
Oct 5 06:03:19 localhost podman[325200]: 2025-10-05 10:03:19.127318898 +0000 UTC m=+0.088625164 container create 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:03:19 localhost systemd[1]: Started libpod-conmon-66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99.scope.
Oct 5 06:03:19 localhost podman[325200]: 2025-10-05 10:03:19.092211428 +0000 UTC m=+0.053517684 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 5 06:03:19 localhost systemd[1]: Started libcrun container.
Oct 5 06:03:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcee93ab366102b2211524d6f34bde6b3f88ba9782a004b1c87cb33e83fdefbb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.199 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.201 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.201 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] VM Started (Lifecycle Event)#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.206 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.209 2 INFO nova.virt.libvirt.driver [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Instance spawned successfully.#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.209 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 5 06:03:19 localhost podman[325200]: 2025-10-05 10:03:19.21263398 +0000 UTC m=+0.173940226 container init 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.224 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.227 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 5 06:03:19 localhost podman[325200]: 2025-10-05 10:03:19.230340893 +0000 UTC m=+0.191647169 container start 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.236 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.237 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.237 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.237 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.238 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.238 2 DEBUG nova.virt.libvirt.driver [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.244 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.244 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.245 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] VM Paused (Lifecycle Event)#033[00m
Oct 5 06:03:19 localhost neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325215]: [NOTICE] (325219) : New worker (325221) forked
Oct 5 06:03:19 localhost neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325215]: [NOTICE] (325219) : Loading success.
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.264 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.267 2 DEBUG nova.virt.driver [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.267 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] VM Resumed (Lifecycle Event)#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.282 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.284 2 DEBUG nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.301 2 INFO nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Took 5.10 seconds to spawn the instance on the hypervisor.#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.302 2 DEBUG nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.303 2 INFO nova.compute.manager [None req-a6cdddba-662f-4f1a-8f32-20059756a6e5 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.353 2 INFO nova.compute.manager [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Took 6.13 seconds to build instance.#033[00m
Oct 5 06:03:19 localhost nova_compute[297130]: 2025-10-05 10:03:19.373 2 DEBUG oslo_concurrency.lockutils [None req-007b9dd9-858a-4fdf-818c-a5235e42ef11 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 6.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:03:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 5 06:03:20 localhost systemd[1]: tmp-crun.A7iyzj.mount: Deactivated successfully.
Oct 5 06:03:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:20.401 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:03:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:20.402 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:03:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:20.403 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:03:20 localhost nova_compute[297130]: 2025-10-05 10:03:20.651 2 DEBUG nova.compute.manager [req-af3800a6-3c27-4b79-8529-234c54fde149 req-8327e143-b696-4ebb-adfb-07f6d6336888 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Received event network-vif-plugged-d48131a7-f387-4e0a-975d-d2f8cc362d7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:03:20 localhost nova_compute[297130]: 2025-10-05 10:03:20.652 2 DEBUG oslo_concurrency.lockutils [req-af3800a6-3c27-4b79-8529-234c54fde149 req-8327e143-b696-4ebb-adfb-07f6d6336888 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 5 06:03:20 localhost nova_compute[297130]: 2025-10-05 10:03:20.652 2 DEBUG oslo_concurrency.lockutils [req-af3800a6-3c27-4b79-8529-234c54fde149 req-8327e143-b696-4ebb-adfb-07f6d6336888 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 5 06:03:20 localhost nova_compute[297130]: 2025-10-05 10:03:20.653 2 DEBUG oslo_concurrency.lockutils [req-af3800a6-3c27-4b79-8529-234c54fde149 req-8327e143-b696-4ebb-adfb-07f6d6336888 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 5 06:03:20 localhost nova_compute[297130]: 2025-10-05 10:03:20.653 2 DEBUG nova.compute.manager [req-af3800a6-3c27-4b79-8529-234c54fde149 req-8327e143-b696-4ebb-adfb-07f6d6336888 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] No waiting events found dispatching network-vif-plugged-d48131a7-f387-4e0a-975d-d2f8cc362d7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 5 06:03:20 localhost nova_compute[297130]: 2025-10-05 10:03:20.653 2 WARNING nova.compute.manager [req-af3800a6-3c27-4b79-8529-234c54fde149 req-8327e143-b696-4ebb-adfb-07f6d6336888 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Received unexpected event network-vif-plugged-d48131a7-f387-4e0a-975d-d2f8cc362d7e for instance with vm_state active and task_state None.
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.411 2 DEBUG nova.compute.manager [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Received event network-changed-d48131a7-f387-4e0a-975d-d2f8cc362d7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.411 2 DEBUG nova.compute.manager [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Refreshing instance network info cache due to event network-changed-d48131a7-f387-4e0a-975d-d2f8cc362d7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.412 2 DEBUG oslo_concurrency.lockutils [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquiring lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.412 2 DEBUG oslo_concurrency.lockutils [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Acquired lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.412 2 DEBUG nova.network.neutron [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Refreshing network info cache for port d48131a7-f387-4e0a-975d-d2f8cc362d7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 5 06:03:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.936 2 DEBUG nova.network.neutron [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updated VIF entry in instance network info cache for port d48131a7-f387-4e0a-975d-d2f8cc362d7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.936 2 DEBUG nova.network.neutron [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updating instance_info_cache with network_info: [{"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 5 06:03:21 localhost nova_compute[297130]: 2025-10-05 10:03:21.959 2 DEBUG oslo_concurrency.lockutils [req-216d64e3-90d8-4604-bbc5-2bde95a9d57c req-aec5493e-bed2-4586-a09e-7c3afba06f98 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] Releasing lock "refresh_cache-93bc594f-2d55-4daf-8d7f-ff1682a13ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 5 06:03:22 localhost nova_compute[297130]: 2025-10-05 10:03:22.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:22 localhost nova_compute[297130]: 2025-10-05 10:03:22.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e99 e99: 6 total, 6 up, 6 in
Oct 5 06:03:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 273 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 219 op/s
Oct 5 06:03:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 06:03:23 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 06:03:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 06:03:23 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:03:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:03:23 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 10c02c34-ddb1-4786-94e8-9e78dcaebf09 (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:03:23 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 10c02c34-ddb1-4786-94e8-9e78dcaebf09 (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:03:23 localhost ceph-mgr[301363]: [progress INFO root] Completed event 10c02c34-ddb1-4786-94e8-9e78dcaebf09 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 06:03:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 06:03:23 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 06:03:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:24 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:03:24 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:03:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e100 e100: 6 total, 6 up, 6 in
Oct 5 06:03:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 273 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 51 KiB/s wr, 153 op/s
Oct 5 06:03:26 localhost podman[248157]: time="2025-10-05T10:03:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:03:26 localhost podman[248157]: @ - - [05/Oct/2025:10:03:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147504 "" "Go-http-client/1.1"
Oct 5 06:03:26 localhost podman[248157]: @ - - [05/Oct/2025:10:03:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19798 "" "Go-http-client/1.1"
Oct 5 06:03:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e101 e101: 6 total, 6 up, 6 in
Oct 5 06:03:26 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 06:03:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:03:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 378 op/s
Oct 5 06:03:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:03:27 localhost nova_compute[297130]: 2025-10-05 10:03:27.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:27 localhost nova_compute[297130]: 2025-10-05 10:03:27.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:28 localhost nova_compute[297130]: 2025-10-05 10:03:28.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:28.850 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:03:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:28.852 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 5 06:03:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 171 op/s
Oct 5 06:03:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 06:03:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 06:03:29 localhost podman[325316]: 2025-10-05 10:03:29.928326811 +0000 UTC m=+0.087359088 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 5 06:03:29 localhost podman[325316]: 2025-10-05 10:03:29.939032524 +0000 UTC m=+0.098064831 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:03:29 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 06:03:30 localhost systemd[1]: tmp-crun.3n49FU.mount: Deactivated successfully.
Oct 5 06:03:30 localhost podman[325317]: 2025-10-05 10:03:30.049061801 +0000 UTC m=+0.205622371 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 06:03:30 localhost podman[325317]: 2025-10-05 10:03:30.064874364 +0000 UTC m=+0.221434964 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 5 06:03:30 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 06:03:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e102 e102: 6 total, 6 up, 6 in
Oct 5 06:03:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 173 op/s
Oct 5 06:03:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:03:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:31.854 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:03:31 localhost podman[325359]: 2025-10-05 10:03:31.929792142 +0000 UTC m=+0.093642272 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Oct 5 06:03:31 localhost podman[325359]: 2025-10-05 10:03:31.963530714 +0000 UTC m=+0.127380824 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Oct 5 06:03:31 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:03:32 localhost ovn_controller[157556]: 2025-10-05T10:03:32Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:bd:33 10.100.0.13
Oct 5 06:03:32 localhost ovn_controller[157556]: 2025-10-05T10:03:32Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:bd:33 10.100.0.13
Oct 5 06:03:32 localhost nova_compute[297130]: 2025-10-05 10:03:32.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:32 localhost nova_compute[297130]: 2025-10-05 10:03:32.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:03:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:03:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:03:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:03:33 localhost snmpd[68888]: empty variable list in _query
Oct 5 06:03:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 379 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 15 MiB/s wr, 363 op/s
Oct 5 06:03:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e103 e103: 6 total, 6 up, 6 in
Oct 5 06:03:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 379 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.4 MiB/s rd, 8.9 MiB/s wr, 233 op/s
Oct 5 06:03:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 306 MiB data, 1023 MiB used, 41 GiB / 42 GiB avail; 8.9 MiB/s rd, 9.0 MiB/s wr, 386 op/s
Oct 5 06:03:37 localhost nova_compute[297130]: 2025-10-05 10:03:37.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:37 localhost nova_compute[297130]: 2025-10-05 10:03:37.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 06:03:37 localhost podman[325377]: 2025-10-05 10:03:37.919948725 +0000 UTC m=+0.088318926 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 5 06:03:37 localhost podman[325377]: 2025-10-05 10:03:37.955308382 +0000 UTC m=+0.123678593 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 5 06:03:37 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 06:03:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 306 MiB data, 1023 MiB used, 41 GiB / 42 GiB avail; 8.8 MiB/s rd, 8.9 MiB/s wr, 382 op/s
Oct 5 06:03:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e104 e104: 6 total, 6 up, 6 in
Oct 5 06:03:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 306 MiB data, 1023 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 158 KiB/s wr, 153 op/s
Oct 5 06:03:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:03:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:03:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:03:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:03:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:03:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:03:42 localhost nova_compute[297130]: 2025-10-05 10:03:42.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:42 localhost nova_compute[297130]: 2025-10-05 10:03:42.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 178 KiB/s wr, 201 op/s
Oct 5 06:03:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:03:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:03:44 localhost podman[325396]: 2025-10-05 10:03:44.915037597 +0000 UTC m=+0.084115711 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:03:44 localhost podman[325396]: 2025-10-05 10:03:44.928076984 +0000 UTC m=+0.097155058 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 06:03:44 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:03:45 localhost systemd[1]: tmp-crun.72Ud6W.mount: Deactivated successfully.
Oct 5 06:03:45 localhost podman[325397]: 2025-10-05 10:03:45.030292147 +0000 UTC m=+0.193697406 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 06:03:45 localhost podman[325397]: 2025-10-05 10:03:45.039837589 +0000 UTC m=+0.203242838 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Oct 5 06:03:45 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 06:03:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 143 KiB/s wr, 161 op/s
Oct 5 06:03:46 localhost openstack_network_exporter[250246]: ERROR 10:03:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:03:46 localhost openstack_network_exporter[250246]: ERROR 10:03:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:03:46 localhost openstack_network_exporter[250246]: ERROR 10:03:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:03:46 localhost openstack_network_exporter[250246]: ERROR 10:03:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:03:46 localhost openstack_network_exporter[250246]:
Oct 5 06:03:46 localhost openstack_network_exporter[250246]: ERROR 10:03:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:03:46 localhost openstack_network_exporter[250246]:
Oct 5 06:03:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 189 KiB/s rd, 17 KiB/s wr, 38 op/s
Oct 5 06:03:47 localhost nova_compute[297130]: 2025-10-05 10:03:47.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:47 localhost nova_compute[297130]: 2025-10-05 10:03:47.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e105 e105: 6 total, 6 up, 6 in
Oct 5 06:03:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:03:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:03:48 localhost podman[325438]: 2025-10-05 10:03:48.917503304 +0000 UTC m=+0.083148904 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 5 06:03:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:48 localhost podman[325438]: 2025-10-05 10:03:48.952814548 +0000 UTC m=+0.118460118 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:03:48 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:03:48 localhost podman[325439]: 2025-10-05 10:03:48.975166799 +0000 UTC m=+0.137856979 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 5 06:03:49 localhost podman[325439]: 2025-10-05 10:03:49.03958109 +0000 UTC m=+0.202271280 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:03:49 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:03:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 233 KiB/s rd, 21 KiB/s wr, 47 op/s
Oct 5 06:03:51 localhost neutron_sriov_agent[264647]: 2025-10-05 10:03:51.436 2 INFO neutron.agent.securitygroups_rpc [None req-2785e779-2925-4423-b26e-6eb40ca7212b f63fee7c8d0d4b7b9ec136ffedafd342 23d0921d70724e3aab0ac10fdc837c26 - - default default] Security group member updated ['d459832e-70ec-4fc9-937f-0daa53e0fda7']#033[00m
Oct 5 06:03:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 189 KiB/s rd, 17 KiB/s wr, 38 op/s
Oct 5 06:03:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:52.002 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:03:51Z, description=, device_id=94f96329-2554-491c-92fa-1f4d4bf17308, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3d8d8bc8-876c-413b-b07b-eeb5a0c16bbb, ip_allocation=immediate, mac_address=fa:16:3e:f1:92:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=880, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:03:51Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m
Oct 5 06:03:52 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses
Oct 5 06:03:52 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:03:52 localhost podman[325496]: 2025-10-05 10:03:52.204876423 +0000 UTC m=+0.048001573 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 5 06:03:52 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:03:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:03:52.449 271653 INFO neutron.agent.dhcp.agent [None req-53175d03-7acb-41e7-bcc0-1b10d3858c24 - - - - - -] DHCP configuration for ports {'3d8d8bc8-876c-413b-b07b-eeb5a0c16bbb'} is completed#033[00m
Oct 5 06:03:52 localhost ovn_controller[157556]: 2025-10-05T10:03:52Z|00081|binding|INFO|Releasing lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 from this chassis (sb_readonly=0)
Oct 5 06:03:52 localhost kernel: device tap1eb1958a-da left promiscuous mode
Oct 5 06:03:52 localhost ovn_controller[157556]: 2025-10-05T10:03:52Z|00082|binding|INFO|Setting lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 down in Southbound
Oct 5 06:03:52 localhost nova_compute[297130]: 2025-10-05 10:03:52.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:52.760 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-cda0aa48-2690-46e0-99f3-e1922fca64be', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cda0aa48-2690-46e0-99f3-e1922fca64be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ec7882f-4ab2-4945-a460-196597f602b5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1eb1958a-da53-4c8f-aea8-41e19bfe5601) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:03:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:52.762 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 1eb1958a-da53-4c8f-aea8-41e19bfe5601 in datapath cda0aa48-2690-46e0-99f3-e1922fca64be unbound from our chassis#033[00m
Oct 5 06:03:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:52.765 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cda0aa48-2690-46e0-99f3-e1922fca64be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:03:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:52.766 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[78b0039b-1896-4793-b99c-9202459bb495]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:03:52 localhost nova_compute[297130]: 2025-10-05 10:03:52.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:52 localhost nova_compute[297130]: 2025-10-05 10:03:52.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:52 localhost nova_compute[297130]: 2025-10-05 10:03:52.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:52 localhost nova_compute[297130]: 2025-10-05 10:03:52.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e106 e106: 6 total, 6 up, 6 in
Oct 5 06:03:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 29 op/s
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:53.654 163299 DEBUG eventlet.wsgi.server [-] (163299) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:53.656 163299 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: Accept: */*#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: Connection: close#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: Content-Type: text/plain#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: Host: 169.254.169.254#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: User-Agent: curl/7.84.0#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: X-Forwarded-For: 10.100.0.13#015
Oct 5 06:03:53 localhost ovn_metadata_agent[163196]: X-Ovn-Network-Id: 7fbae8f2-abd5-4dc6-a4c4-731281ea7308 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct 5 06:03:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.296 163299 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.297 163299 INFO eventlet.wsgi.server [-] 10.100.0.13, "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1673 time: 0.6412055#033[00m
Oct 5 06:03:54 localhost haproxy-metadata-proxy-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325221]: 10.100.0.13:40760 [05/Oct/2025:10:03:53.653] listener listener/metadata 0/0/0/643/643 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.456 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.458 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.459 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.459 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.460 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.461 2 INFO nova.compute.manager [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Terminating instance#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.463 2 DEBUG nova.compute.manager [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 5 06:03:54 localhost kernel: device tapd48131a7-f3 left promiscuous mode
Oct 5 06:03:54 localhost NetworkManager[5970]: [1759658634.5437] device (tapd48131a7-f3): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 5 06:03:54 localhost ovn_controller[157556]: 2025-10-05T10:03:54Z|00083|binding|INFO|Releasing lport d48131a7-f387-4e0a-975d-d2f8cc362d7e from this chassis (sb_readonly=0)
Oct 5 06:03:54 localhost ovn_controller[157556]: 2025-10-05T10:03:54Z|00084|binding|INFO|Setting lport d48131a7-f387-4e0a-975d-d2f8cc362d7e down in Southbound
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:54 localhost ovn_controller[157556]: 2025-10-05T10:03:54Z|00085|binding|INFO|Removing iface tapd48131a7-f3 ovn-installed in OVS
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.576 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:bd:33 10.100.0.13'], port_security=['fa:16:3e:a9:bd:33 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '93bc594f-2d55-4daf-8d7f-ff1682a13ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d385dfb4a744527807f14f2c315ebb6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '18162d23-56f3-4a7e-93c2-8a3429bcf8f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=54c0875b-2655-475c-808c-45277084df2c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=d48131a7-f387-4e0a-975d-d2f8cc362d7e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.579 163201 INFO neutron.agent.ovn.metadata.agent [-] Port
d48131a7-f387-4e0a-975d-d2f8cc362d7e in datapath 7fbae8f2-abd5-4dc6-a4c4-731281ea7308 unbound from our chassis#033[00m Oct 5 06:03:54 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d0000000a.scope: Deactivated successfully. Oct 5 06:03:54 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d0000000a.scope: Consumed 14.557s CPU time. Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.581 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fbae8f2-abd5-4dc6-a4c4-731281ea7308, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.582 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[10c28886-9a90-4a9a-8d9d-cc18f243ce1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.584 163201 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308 namespace which is not needed anymore#033[00m Oct 5 06:03:54 localhost systemd-machined[206743]: Machine qemu-2-instance-0000000a terminated. 
Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.711 2 INFO nova.virt.libvirt.driver [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Instance destroyed successfully.#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.712 2 DEBUG nova.objects.instance [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lazy-loading 'resources' on Instance uuid 93bc594f-2d55-4daf-8d7f-ff1682a13ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.732 2 DEBUG nova.virt.libvirt.vif [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-05T10:03:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005471152.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPetdIzX/DmYbuho/tXYoT+1fIe7+15KwHkqrQ2Jxm3DnDcJjrE6cq7QOdR7SpvKf/EdYyjCR4NQsyAcA0uFCUjYiFoXcP0oy/CffHrzk3+7jJw6fwvaC/fOGojbc79jbA==',key_name='tempest-keypair-1153912512',keypairs=,launch_index=0,launched_at=2025-10-05T10:03:19Z,launched_on='np0005471152.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005471152.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='8d385dfb4a744527807f14f2c315ebb6',ramdisk_id='',reservation_id='r-mjkjk40f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6b9a58ff-e5da-4693-8e9c-7ab12fb1a2da',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-339982464',owner_user_name='tempest-ServersV294TestFqdnHostnames-339982464-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-10-05T10:03:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d653613d543e463ab1cad06b2f955cc8',uuid=93bc594f-2d55-4daf-8d7f-ff1682a13ddf,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.733 2 DEBUG nova.network.os_vif_util [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Converting VIF {"id": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "address": "fa:16:3e:a9:bd:33", "network": {"id": "7fbae8f2-abd5-4dc6-a4c4-731281ea7308", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1994450218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "8d385dfb4a744527807f14f2c315ebb6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd48131a7-f3", "ovs_interfaceid": "d48131a7-f387-4e0a-975d-d2f8cc362d7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.735 2 DEBUG nova.network.os_vif_util [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.735 2 DEBUG os_vif [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.738 2 DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd48131a7-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.744 2 INFO os_vif [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:bd:33,bridge_name='br-int',has_traffic_filtering=True,id=d48131a7-f387-4e0a-975d-d2f8cc362d7e,network=Network(7fbae8f2-abd5-4dc6-a4c4-731281ea7308),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd48131a7-f3')#033[00m Oct 5 06:03:54 localhost neutron_sriov_agent[264647]: 2025-10-05 10:03:54.772 2 INFO neutron.agent.securitygroups_rpc [None req-a49a3e7f-2856-44c5-9390-f6d62ab91c62 f63fee7c8d0d4b7b9ec136ffedafd342 23d0921d70724e3aab0ac10fdc837c26 - - default default] Security group member updated ['d459832e-70ec-4fc9-937f-0daa53e0fda7']#033[00m Oct 5 06:03:54 localhost neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325215]: [NOTICE] (325219) : haproxy version is 2.8.14-c23fe91 Oct 5 06:03:54 localhost neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325215]: [NOTICE] (325219) : path to executable is /usr/sbin/haproxy Oct 5 06:03:54 localhost neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325215]: [ALERT] (325219) : Current worker (325221) exited with 
code 143 (Terminated) Oct 5 06:03:54 localhost neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308[325215]: [WARNING] (325219) : All workers exited. Exiting... (0) Oct 5 06:03:54 localhost systemd[1]: libpod-66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99.scope: Deactivated successfully. Oct 5 06:03:54 localhost podman[325556]: 2025-10-05 10:03:54.797689198 +0000 UTC m=+0.085708404 container died 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:03:54 localhost systemd[1]: tmp-crun.Yxoexd.mount: Deactivated successfully. Oct 5 06:03:54 localhost systemd[1]: tmp-crun.b0jhJH.mount: Deactivated successfully. 
Oct 5 06:03:54 localhost podman[325556]: 2025-10-05 10:03:54.837967429 +0000 UTC m=+0.125986575 container cleanup 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:03:54 localhost podman[325587]: 2025-10-05 10:03:54.862583292 +0000 UTC m=+0.060834194 container cleanup 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:03:54 localhost systemd[1]: libpod-conmon-66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99.scope: Deactivated successfully. 
Oct 5 06:03:54 localhost podman[325605]: 2025-10-05 10:03:54.931964808 +0000 UTC m=+0.072986316 container remove 66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.935 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[ea932559-27d0-40d8-8fdc-4d921839b908]: (4, ('Sun Oct 5 10:03:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308 (66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99)\n66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99\nSun Oct 5 10:03:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308 (66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99)\n66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.937 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5770cd-3af5-44a5-bf86-aae712254ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.939 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fbae8f2-a0, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost kernel: device tap7fbae8f2-a0 left promiscuous mode Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:54.984 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[07c70d0e-02f8-48bd-b571-2409fe5ca3fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:54 localhost nova_compute[297130]: 2025-10-05 10:03:54.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:55.003 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[815a208a-adfb-483d-b9d0-7f9e0ae91120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:55.004 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[557a7a1b-befb-4b29-9296-f6d14fbf8f31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:55.027 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[10496452-5dd1-46d5-9cb3-b1a51555ccb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], 
['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 
'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1207459, 'reachable_time': 21376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 
'sequence_number': 255, 'pid': 325623, 'error': None, 'target': 'ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:55.029 163334 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fbae8f2-abd5-4dc6-a4c4-731281ea7308 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Oct 5 06:03:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:03:55.030 163334 DEBUG oslo.privsep.daemon [-] privsep: reply[c1019773-16e8-4b18-a85d-32ea66ae8c1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.367 2 INFO nova.virt.libvirt.driver [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Deleting instance files /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf_del#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.368 2 INFO nova.virt.libvirt.driver [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Deletion of /var/lib/nova/instances/93bc594f-2d55-4daf-8d7f-ff1682a13ddf_del complete#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.452 2 DEBUG nova.virt.libvirt.host [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 
10:03:55.453 2 INFO nova.virt.libvirt.host [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] UEFI support detected#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.455 2 INFO nova.compute.manager [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.456 2 DEBUG oslo.service.loopingcall [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.456 2 DEBUG nova.compute.manager [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Oct 5 06:03:55 localhost nova_compute[297130]: 2025-10-05 10:03:55.457 2 DEBUG nova.network.neutron [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Oct 5 06:03:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 225 MiB data, 885 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 29 op/s Oct 5 06:03:55 localhost systemd[1]: var-lib-containers-storage-overlay-dcee93ab366102b2211524d6f34bde6b3f88ba9782a004b1c87cb33e83fdefbb-merged.mount: Deactivated successfully. 
Oct 5 06:03:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66f967214de307d4de84ac31022ec7358c9e222116f05485bc5d18f023e27b99-userdata-shm.mount: Deactivated successfully. Oct 5 06:03:55 localhost systemd[1]: run-netns-ovnmeta\x2d7fbae8f2\x2dabd5\x2d4dc6\x2da4c4\x2d731281ea7308.mount: Deactivated successfully. Oct 5 06:03:56 localhost podman[248157]: time="2025-10-05T10:03:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:03:56 localhost podman[248157]: @ - - [05/Oct/2025:10:03:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146317 "" "Go-http-client/1.1" Oct 5 06:03:56 localhost podman[248157]: @ - - [05/Oct/2025:10:03:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19325 "" "Go-http-client/1.1" Oct 5 06:03:56 localhost neutron_sriov_agent[264647]: 2025-10-05 10:03:56.362 2 INFO neutron.agent.securitygroups_rpc [req-cc241a83-bf1b-4563-99d8-792df22de69f req-7c539352-7cfc-4988-848a-55f30b42edc2 d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Security group member updated ['18162d23-56f3-4a7e-93c2-8a3429bcf8f3']#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.448 2 DEBUG nova.network.neutron [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.465 2 INFO nova.compute.manager [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Took 1.01 seconds to deallocate network for instance.#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.483 2 DEBUG nova.compute.manager [req-91c8b324-7003-4bbb-b2ea-b725e106cba1 req-7392a8e8-e230-4715-bfa9-fc7368340b84 
89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Received event network-vif-deleted-d48131a7-f387-4e0a-975d-d2f8cc362d7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.484 2 INFO nova.compute.manager [req-91c8b324-7003-4bbb-b2ea-b725e106cba1 req-7392a8e8-e230-4715-bfa9-fc7368340b84 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Neutron deleted interface d48131a7-f387-4e0a-975d-d2f8cc362d7e; detaching it from the instance and deleting it from the info cache#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.484 2 DEBUG nova.network.neutron [req-91c8b324-7003-4bbb-b2ea-b725e106cba1 req-7392a8e8-e230-4715-bfa9-fc7368340b84 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.508 2 DEBUG nova.compute.manager [req-91c8b324-7003-4bbb-b2ea-b725e106cba1 req-7392a8e8-e230-4715-bfa9-fc7368340b84 89e76f8d8a704047acc0434d9b9f95ed ffbb1c514d6a4f40a7f8a9f769bc781a - - default default] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Detach interface failed, port_id=d48131a7-f387-4e0a-975d-d2f8cc362d7e, reason: Instance 93bc594f-2d55-4daf-8d7f-ff1682a13ddf could not be found. 
_process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.514 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.515 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:03:56 localhost nova_compute[297130]: 2025-10-05 10:03:56.603 2 DEBUG oslo_concurrency.processutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:03:57 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:03:57 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3481042270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.068 2 DEBUG oslo_concurrency.processutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.076 2 DEBUG nova.compute.provider_tree [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.093 2 DEBUG nova.scheduler.client.report [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.123 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.158 2 INFO nova.scheduler.client.report [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Deleted allocations for instance 93bc594f-2d55-4daf-8d7f-ff1682a13ddf#033[00m Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.222 2 DEBUG oslo_concurrency.lockutils [None req-cc241a83-bf1b-4563-99d8-792df22de69f d653613d543e463ab1cad06b2f955cc8 8d385dfb4a744527807f14f2c315ebb6 - - default default] Lock "93bc594f-2d55-4daf-8d7f-ff1682a13ddf" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:03:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 6.3 KiB/s wr, 83 op/s Oct 5 06:03:57 localhost nova_compute[297130]: 2025-10-05 10:03:57.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:58 localhost nova_compute[297130]: 2025-10-05 10:03:58.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:03:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:03:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 5.6 KiB/s wr, 73 op/s Oct 5 06:03:59 
localhost nova_compute[297130]: 2025-10-05 10:03:59.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:04:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:04:00 localhost systemd[1]: tmp-crun.dPCW3q.mount: Deactivated successfully. Oct 5 06:04:00 localhost podman[325649]: 2025-10-05 10:04:00.931138243 +0000 UTC m=+0.095902732 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:04:00 localhost podman[325648]: 2025-10-05 10:04:00.972611227 +0000 UTC m=+0.137622863 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:00 localhost podman[325648]: 2025-10-05 10:04:00.983726591 +0000 UTC m=+0.148738257 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 
'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:04:00 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:04:01 localhost podman[325649]: 2025-10-05 10:04:01.062195976 +0000 UTC m=+0.226960465 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:04:01 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:04:01 localhost dnsmasq[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:04:01 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:01 localhost podman[325707]: 2025-10-05 10:04:01.222789975 +0000 UTC m=+0.063494215 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:04:01 localhost dnsmasq-dhcp[271991]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent [None req-2422a14a-f605-4bd7-bc1d-3dde72120a4b - - - - - -] Unable to reload_allocations dhcp for cda0aa48-2690-46e0-99f3-e1922fca64be.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap1eb1958a-da not found in namespace qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be. 
Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 5 06:04:01 localhost 
neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent return fut.result() Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent raise self._exception Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap1eb1958a-da not found in namespace qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be. Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.269 271653 ERROR neutron.agent.dhcp.agent #033[00m Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.277 271653 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.450 271653 INFO neutron.agent.dhcp.agent [None req-96d3201a-f45e-4a1e-bc66-523ae5e44595 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.451 271653 INFO neutron.agent.dhcp.agent [-] Starting network cda0aa48-2690-46e0-99f3-e1922fca64be dhcp configuration#033[00m Oct 5 06:04:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 e107: 6 total, 6 up, 6 in Oct 5 06:04:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 61 op/s Oct 5 06:04:01 localhost dnsmasq[271991]: exiting on receipt of SIGTERM Oct 5 06:04:01 localhost podman[325738]: 2025-10-05 10:04:01.612514088 +0000 UTC m=+0.061128821 container kill a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 06:04:01 localhost systemd[1]: libpod-a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3.scope: Deactivated successfully. Oct 5 06:04:01 localhost podman[325751]: 2025-10-05 10:04:01.694199832 +0000 UTC m=+0.065396389 container died a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:04:01 localhost podman[325751]: 2025-10-05 10:04:01.723845982 +0000 UTC m=+0.095042489 container cleanup a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:01 localhost systemd[1]: libpod-conmon-a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3.scope: Deactivated successfully. 
Oct 5 06:04:01 localhost podman[325753]: 2025-10-05 10:04:01.770726843 +0000 UTC m=+0.132771550 container remove a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:01.820 271653 INFO neutron.agent.linux.ip_lib [-] Device tap1eb1958a-da cannot be used as it has no MAC address#033[00m Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost kernel: device tap1eb1958a-da entered promiscuous mode Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost ovn_controller[157556]: 2025-10-05T10:04:01Z|00086|binding|INFO|Claiming lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 for this chassis. Oct 5 06:04:01 localhost NetworkManager[5970]: [1759658641.8540] manager: (tap1eb1958a-da): new Generic device (/org/freedesktop/NetworkManager/Devices/22) Oct 5 06:04:01 localhost ovn_controller[157556]: 2025-10-05T10:04:01Z|00087|binding|INFO|1eb1958a-da53-4c8f-aea8-41e19bfe5601: Claiming unknown Oct 5 06:04:01 localhost systemd-udevd[325786]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:04:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:01.863 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-cda0aa48-2690-46e0-99f3-e1922fca64be', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cda0aa48-2690-46e0-99f3-e1922fca64be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b36437b65444bcdac75beef77b6981e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ec7882f-4ab2-4945-a460-196597f602b5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=1eb1958a-da53-4c8f-aea8-41e19bfe5601) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:01 localhost ovn_controller[157556]: 2025-10-05T10:04:01Z|00088|binding|INFO|Setting lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 up in Southbound Oct 5 06:04:01 localhost ovn_controller[157556]: 2025-10-05T10:04:01Z|00089|binding|INFO|Setting lport 1eb1958a-da53-4c8f-aea8-41e19bfe5601 ovn-installed in OVS Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost 
nova_compute[297130]: 2025-10-05 10:04:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:01.867 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 1eb1958a-da53-4c8f-aea8-41e19bfe5601 in datapath cda0aa48-2690-46e0-99f3-e1922fca64be bound to our chassis#033[00m Oct 5 06:04:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:01.869 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Port f8841526-3a20-41a8-89c0-62e4facfb943 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:01.870 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cda0aa48-2690-46e0-99f3-e1922fca64be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:04:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:01.871 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad1f7eb-f7fd-469e-949c-eb90764b7ade]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost systemd[1]: var-lib-containers-storage-overlay-6d47fc98d2566baffc148b2425b8ee9d1abe3404f528e542010b714b5b779d9c-merged.mount: Deactivated successfully. 
Oct 5 06:04:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a250ce86016af3bd7c7c39e1392bbe06a6dd8a70ee44cd700ccc1239b3dec1e3-userdata-shm.mount: Deactivated successfully. Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:01 localhost nova_compute[297130]: 2025-10-05 10:04:01.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:02 localhost nova_compute[297130]: 2025-10-05 10:04:02.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:02 localhost podman[325839]: Oct 5 06:04:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:04:02 localhost podman[325839]: 2025-10-05 10:04:02.842959813 +0000 UTC m=+0.123061115 container create 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:04:02 localhost podman[325839]: 2025-10-05 10:04:02.769117324 +0000 UTC m=+0.049218626 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:04:02 localhost systemd[1]: Started libpod-conmon-8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053.scope. 
Oct 5 06:04:02 localhost systemd[1]: tmp-crun.yjlIRY.mount: Deactivated successfully. Oct 5 06:04:02 localhost systemd[1]: Started libcrun container. Oct 5 06:04:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e495e5311936cc7b167207370f09d4af9405944a0ab3de6c01c4c729f370c43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:04:02 localhost systemd[1]: tmp-crun.QAP6Ci.mount: Deactivated successfully. Oct 5 06:04:02 localhost podman[325853]: 2025-10-05 10:04:02.93907207 +0000 UTC m=+0.093870118 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 5 06:04:02 localhost podman[325839]: 2025-10-05 10:04:02.941458325 +0000 UTC m=+0.221559607 container init 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:04:02 localhost podman[325839]: 2025-10-05 10:04:02.950544363 +0000 UTC m=+0.230645625 container start 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:02 localhost dnsmasq[325876]: started, version 2.85 cachesize 150 Oct 5 06:04:02 localhost dnsmasq[325876]: DNS service limited to local subnets Oct 5 06:04:02 localhost dnsmasq[325876]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:04:02 localhost dnsmasq[325876]: warning: no upstream servers configured Oct 5 06:04:02 localhost dnsmasq-dhcp[325876]: DHCP, static leases only on 192.168.122.0, lease time 1d Oct 5 06:04:02 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 
5 06:04:02 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:02 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:03.009 271653 INFO neutron.agent.dhcp.agent [None req-32784c0c-8900-410e-90f3-7e7e9998f322 - - - - - -] Finished network cda0aa48-2690-46e0-99f3-e1922fca64be dhcp configuration#033[00m Oct 5 06:04:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:03.011 271653 INFO neutron.agent.dhcp.agent [None req-96d3201a-f45e-4a1e-bc66-523ae5e44595 - - - - - -] Synchronizing state complete#033[00m Oct 5 06:04:03 localhost podman[325853]: 2025-10-05 10:04:03.023189709 +0000 UTC m=+0.177987707 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=) Oct 5 06:04:03 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:04:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:03.432 271653 INFO neutron.agent.dhcp.agent [None req-24b4e02c-34ef-499b-b986-162d27c2743e - - - - - -] DHCP configuration for ports {'220c66ec-c183-4fa3-847d-06fa876ccd15', '1eb1958a-da53-4c8f-aea8-41e19bfe5601', 'b9ee97af-ea88-430c-9c1c-aa81648e44db', '9a7a5af1-bb98-40ce-b1bc-45738e2a191a'} is completed#033[00m Oct 5 06:04:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 49 op/s Oct 5 06:04:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:04 localhost nova_compute[297130]: 2025-10-05 10:04:04.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 49 op/s Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.517233) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646517265, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2345, "num_deletes": 261, "total_data_size": 2986630, "memory_usage": 3044528, "flush_reason": "Manual Compaction"} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646528441, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 1936401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17070, "largest_seqno": 19409, "table_properties": {"data_size": 1928002, "index_size": 5036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18370, "raw_average_key_size": 20, "raw_value_size": 1910698, "raw_average_value_size": 2127, "num_data_blocks": 221, "num_entries": 898, "num_filter_entries": 898, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658490, "oldest_key_time": 1759658490, "file_creation_time": 1759658646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11260 microseconds, and 5240 cpu microseconds. Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.528490) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 1936401 bytes OK Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.528513) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.535588) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.535612) EVENT_LOG_v1 {"time_micros": 1759658646535605, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.535630) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2976122, prev total WAL file size 
2976446, number of live WAL files 2. Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.537648) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303134' seq:72057594037927935, type:22 .. '6C6F676D0034323636' seq:0, type:0; will stop at (end) Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(1891KB)], [24(15MB)] Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646537713, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 18655642, "oldest_snapshot_seqno": -1} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12555 keys, 18483557 bytes, temperature: kUnknown Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646626954, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 18483557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18409942, "index_size": 41130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31429, "raw_key_size": 337850, "raw_average_key_size": 26, "raw_value_size": 18194178, 
"raw_average_value_size": 1449, "num_data_blocks": 1558, "num_entries": 12555, "num_filter_entries": 12555, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.627337) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 18483557 bytes Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.629337) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.8 rd, 206.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 15.9 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(19.2) write-amplify(9.5) OK, records in: 13094, records dropped: 539 output_compression: NoCompression Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.629406) EVENT_LOG_v1 {"time_micros": 1759658646629359, "job": 12, "event": "compaction_finished", "compaction_time_micros": 89356, "compaction_time_cpu_micros": 32463, "output_level": 6, "num_output_files": 1, "total_output_size": 18483557, "num_input_records": 13094, "num_output_records": 12555, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646629993, "job": 12, "event": "table_file_deletion", "file_number": 26} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658646633538, "job": 12, 
"event": "table_file_deletion", "file_number": 24} Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.537487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.633679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.633690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.633693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.633697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:06.633700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:07 localhost nova_compute[297130]: 2025-10-05 10:04:07.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:07 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:07.946 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 
'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:07 localhost nova_compute[297130]: 2025-10-05 10:04:07.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:07 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:07.948 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:04:08 localhost podman[325878]: 2025-10-05 10:04:08.915798531 +0000 UTC m=+0.081931410 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 06:04:08 localhost podman[325878]: 2025-10-05 10:04:08.925339863 +0000 UTC m=+0.091472692 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Oct 5 06:04:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:08 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:04:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:09 localhost nova_compute[297130]: 2025-10-05 10:04:09.705 2 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Oct 5 06:04:09 localhost nova_compute[297130]: 2025-10-05 10:04:09.706 2 INFO nova.compute.manager [-] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] VM Stopped (Lifecycle Event)#033[00m Oct 5 06:04:09 localhost nova_compute[297130]: 2025-10-05 10:04:09.737 2 DEBUG nova.compute.manager [None req-621df6e0-77b2-4ee5-a99f-0118a0b9c075 - - - - - -] [instance: 93bc594f-2d55-4daf-8d7f-ff1682a13ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Oct 5 06:04:09 localhost nova_compute[297130]: 2025-10-05 10:04:09.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:11.124 271653 INFO neutron.agent.linux.ip_lib [None req-8d56ae15-0567-4328-9a53-74b23f68cdb4 - - - - - -] Device tap513833a6-34 cannot be used as it has no MAC address#033[00m Oct 5 06:04:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:11.125 271653 INFO neutron.agent.linux.ip_lib [None req-8bdaf0aa-6df1-4ab1-a6f8-f54bac320ad2 - - - - - -] Device tap90ac5484-a7 cannot be used as it has no MAC address#033[00m Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost kernel: device tap513833a6-34 entered promiscuous mode Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00090|binding|INFO|Claiming lport 513833a6-34aa-4fb9-b56a-132fdb1b291b for this 
chassis. Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00091|binding|INFO|513833a6-34aa-4fb9-b56a-132fdb1b291b: Claiming unknown Oct 5 06:04:11 localhost NetworkManager[5970]: [1759658651.1610] manager: (tap513833a6-34): new Generic device (/org/freedesktop/NetworkManager/Devices/23) Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost systemd-udevd[325914]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00092|binding|INFO|Setting lport 513833a6-34aa-4fb9-b56a-132fdb1b291b ovn-installed in OVS Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost kernel: device tap90ac5484-a7 entered promiscuous mode Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00093|if_status|INFO|Dropped 2 log messages in last 1280 seconds (most recently, 1280 seconds ago) due to excessive rate Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00094|if_status|INFO|Not updating pb chassis for 90ac5484-a7ab-4f3a-ae39-413d08499a4c now as sb is readonly Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost NetworkManager[5970]: [1759658651.2110] manager: (tap90ac5484-a7): new Generic device 
(/org/freedesktop/NetworkManager/Devices/24) Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00095|binding|INFO|Claiming lport 90ac5484-a7ab-4f3a-ae39-413d08499a4c for this chassis. 
Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00096|binding|INFO|90ac5484-a7ab-4f3a-ae39-413d08499a4c: Claiming unknown
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.307 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-3313983b-2e96-4a06-b17a-d6b215fac86b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3313983b-2e96-4a06-b17a-d6b215fac86b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4003abd2530843a7a9d79f2692bf3d99', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b8d3209-c2ee-4c78-bb04-61e78cad9fba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=513833a6-34aa-4fb9-b56a-132fdb1b291b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.309 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 513833a6-34aa-4fb9-b56a-132fdb1b291b in datapath 3313983b-2e96-4a06-b17a-d6b215fac86b bound to our chassis
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.311 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3313983b-2e96-4a06-b17a-d6b215fac86b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.312 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb1b19f-3fb6-43ed-8507-7a7f0835d639]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00097|binding|INFO|Setting lport 513833a6-34aa-4fb9-b56a-132fdb1b291b up in Southbound
Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00098|binding|INFO|Setting lport 90ac5484-a7ab-4f3a-ae39-413d08499a4c ovn-installed in OVS
Oct 5 06:04:11 localhost nova_compute[297130]: 2025-10-05 10:04:11.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:04:11 localhost ovn_controller[157556]: 2025-10-05T10:04:11Z|00099|binding|INFO|Setting lport 90ac5484-a7ab-4f3a-ae39-413d08499a4c up in Southbound
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.428 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-a83b73df-1192-4d7e-96ed-794e48f8f131', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b73df-1192-4d7e-96ed-794e48f8f131', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '729ee41aeaa14c54bd5d5db84035013e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e61edcbe-2cf7-4227-9c15-1ada0dedc03c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=90ac5484-a7ab-4f3a-ae39-413d08499a4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.431 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 90ac5484-a7ab-4f3a-ae39-413d08499a4c in datapath a83b73df-1192-4d7e-96ed-794e48f8f131 unbound from our chassis
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.432 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a83b73df-1192-4d7e-96ed-794e48f8f131 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:04:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:11.433 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[eb076523-f377-4fff-91b2-33c7555c3082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:04:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:04:11
Oct 5 06:04:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 5 06:04:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap
Oct 5 06:04:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'manila_metadata', 'manila_data', 'images', 'vms']
Oct 5 06:04:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10
changes
Oct 5 06:04:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:04:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:04:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:04:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:04:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:04:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:04:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:04:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:04:12 localhost podman[326016]:
Oct 5 06:04:12 localhost podman[326016]: 2025-10-05 10:04:12.353327436 +0000 UTC m=+0.096158370 container create 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 5 06:04:12 localhost podman[326016]: 2025-10-05 10:04:12.30590274 +0000 UTC m=+0.048733704 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:04:12 localhost systemd[1]: Started libpod-conmon-4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6.scope.
Oct 5 06:04:12 localhost podman[326035]:
Oct 5 06:04:12 localhost systemd[1]: Started libcrun container.
Oct 5 06:04:12 localhost podman[326035]: 2025-10-05 10:04:12.425733865 +0000 UTC m=+0.103201972 container create 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 5 06:04:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1c3494ee37c54aeb570624f9330640946a15f828238cc6652798c2b9bffa03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:04:12 localhost podman[326016]: 2025-10-05 10:04:12.444715684 +0000 UTC m=+0.187546608 container init 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 06:04:12 localhost podman[326016]: 2025-10-05 10:04:12.452181228 +0000 UTC m=+0.195012152 container start 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:04:12 localhost dnsmasq[326055]: started, version 2.85 cachesize 150
Oct 5 06:04:12 localhost dnsmasq[326055]: DNS service limited to local subnets
Oct 5 06:04:12 localhost dnsmasq[326055]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:04:12 localhost dnsmasq[326055]: warning: no upstream servers configured
Oct 5 06:04:12 localhost dnsmasq-dhcp[326055]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:04:12 localhost dnsmasq[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/addn_hosts - 0 addresses
Oct 5 06:04:12 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/host
Oct 5 06:04:12 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/opts
Oct 5 06:04:12 localhost systemd[1]: Started libpod-conmon-13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71.scope.
Oct 5 06:04:12 localhost podman[326035]: 2025-10-05 10:04:12.380633472 +0000 UTC m=+0.058101579 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:04:12 localhost systemd[1]: Started libcrun container.
Oct 5 06:04:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2531c07c5ac7da804f5b470861092946021c9b58242f9fcf654aeaec7be9b35b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:04:12 localhost podman[326035]: 2025-10-05 10:04:12.497815035 +0000 UTC m=+0.175283142 container init 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:04:12 localhost podman[326035]: 2025-10-05 10:04:12.50785116 +0000 UTC m=+0.185319277 container start 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 06:04:12 localhost dnsmasq[326060]: started, version 2.85 cachesize 150
Oct 5 06:04:12 localhost dnsmasq[326060]: DNS service limited to local subnets
Oct 5 06:04:12 localhost dnsmasq[326060]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:04:12 localhost dnsmasq[326060]: warning: no upstream servers configured
Oct 5 06:04:12 localhost dnsmasq-dhcp[326060]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:04:12 localhost dnsmasq[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/addn_hosts - 0 addresses
Oct 5 06:04:12 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/host
Oct 5 06:04:12 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/opts
Oct 5 06:04:12 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:12.808 271653 INFO neutron.agent.dhcp.agent [None req-b15c37b4-7194-430c-9e5c-ccf65a4a2efc - - - - - -] DHCP configuration for ports {'02b3311e-a2a1-48a7-aaff-7ee4b948b5b1', '28ce0302-ef40-4c0d-826a-c3311283987f'} is completed
Oct 5 06:04:12 localhost nova_compute[297130]: 2025-10-05 10:04:12.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:04:13 localhost nova_compute[297130]: 2025-10-05 10:04:13.580 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:04:13 localhost nova_compute[297130]: 2025-10-05 10:04:13.581 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:04:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:04:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:04:13 localhost
ovn_metadata_agent[163196]: 2025-10-05 10:04:13.951 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:04:14 localhost nova_compute[297130]: 2025-10-05 10:04:14.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:04:14 localhost nova_compute[297130]: 2025-10-05 10:04:14.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:04:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:14.490 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:13Z, description=, device_id=0eca3632-10b1-44ae-8606-2934e8ebaceb, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=728f8ed9-a537-48f4-94f5-362fa2532714, ip_allocation=immediate, mac_address=fa:16:3e:53:48:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1009, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:13Z on network cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:04:14 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses
Oct 5 06:04:14 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:04:14 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:04:14 localhost podman[326078]: 2025-10-05 10:04:14.70687391 +0000 UTC m=+0.056852195 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:04:14 localhost nova_compute[297130]: 2025-10-05 10:04:14.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:04:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:14.875 271653 INFO neutron.agent.dhcp.agent [None req-4ad54ca0-5c5e-4496-8158-c3c3b5b37c96 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:13Z, description=, device_id=dd848696-1192-42dc-bdf5-98abbd49c528, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=393119f2-ac93-41e1-8554-23b51e86ad03, ip_allocation=immediate, mac_address=fa:16:3e:4c:8c:3e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1010, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:14Z on network cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:04:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:15.001 271653 INFO neutron.agent.dhcp.agent [None req-7de20cec-c648-448c-b51a-3404c1e653d4 - - - - - -] DHCP configuration for ports {'728f8ed9-a537-48f4-94f5-362fa2532714'} is completed
Oct 5 06:04:15 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses
Oct 5 06:04:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:04:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:04:15 localhost podman[326115]: 2025-10-05 10:04:15.106983816 +0000 UTC m=+0.061728958 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 06:04:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:04:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:04:15 localhost systemd[1]: tmp-crun.4752xq.mount: Deactivated successfully.
Oct 5 06:04:15 localhost podman[326128]: 2025-10-05 10:04:15.232002064 +0000 UTC m=+0.095748938 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd)
Oct 5 06:04:15 localhost podman[326128]: 2025-10-05 10:04:15.268145922 +0000 UTC m=+0.131892806 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 5 06:04:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:15.269 271653 INFO neutron.agent.dhcp.agent [None req-cbd9be2a-7137-4cab-8968-27803311284c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:14Z, description=, device_id=3594095b-1838-4628-b147-c16cee0fd774, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4626ec7a-d5ce-4b90-a93d-150807ed56aa, ip_allocation=immediate, mac_address=fa:16:3e:96:33:b2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1012, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:14Z on network cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:04:15 localhost nova_compute[297130]: 2025-10-05 10:04:15.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:04:15 localhost nova_compute[297130]: 2025-10-05 10:04:15.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 5 06:04:15 localhost nova_compute[297130]: 2025-10-05 10:04:15.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 5 06:04:15 localhost podman[326129]: 2025-10-05 10:04:15.278605598 +0000 UTC m=+0.138614970 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:04:15 localhost systemd[1]:
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:04:15 localhost nova_compute[297130]: 2025-10-05 10:04:15.296 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 5 06:04:15 localhost podman[326129]: 2025-10-05 10:04:15.316288238 +0000 UTC m=+0.176297600 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 06:04:15 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 06:04:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:15.373 271653 INFO neutron.agent.dhcp.agent [None req-5f6011bd-4152-44f4-bcb7-a5277ab52a49 - - - - - -] DHCP configuration for ports {'393119f2-ac93-41e1-8554-23b51e86ad03'} is completed
Oct 5 06:04:15 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses
Oct 5 06:04:15 localhost podman[326195]: 2025-10-05 10:04:15.529673071 +0000 UTC m=+0.064839754 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:04:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:04:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:04:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:04:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:15.814 271653 INFO neutron.agent.dhcp.agent [None req-9739760e-7811-4d47-a862-8c251b83583c - - - - - -] DHCP configuration for ports {'4626ec7a-d5ce-4b90-a93d-150807ed56aa'} is completed
Oct 5 06:04:16 localhost nova_compute[297130]: 2025-10-05 10:04:16.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:04:16 localhost nova_compute[297130]: 2025-10-05 10:04:16.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:04:16 localhost nova_compute[297130]: 2025-10-05 10:04:16.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:04:16 localhost openstack_network_exporter[250246]: ERROR 10:04:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:04:16 localhost openstack_network_exporter[250246]: ERROR 10:04:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:04:16 localhost openstack_network_exporter[250246]: ERROR 10:04:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:04:16 localhost openstack_network_exporter[250246]: ERROR 10:04:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:04:16 localhost openstack_network_exporter[250246]:
Oct 5 06:04:16 localhost openstack_network_exporter[250246]: ERROR 10:04:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:04:16 localhost openstack_network_exporter[250246]:
Oct 5 06:04:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:04:17 localhost nova_compute[297130]: 2025-10-05 10:04:17.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:04:18 localhost nova_compute[297130]: 2025-10-05 10:04:18.272 2 DEBUG oslo_service.periodic_task
[None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:18 localhost nova_compute[297130]: 2025-10-05 10:04:18.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:18 localhost nova_compute[297130]: 2025-10-05 10:04:18.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:04:18 localhost nova_compute[297130]: 2025-10-05 10:04:18.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.731160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658731193, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 411, "num_deletes": 251, "total_data_size": 198513, "memory_usage": 206392, "flush_reason": "Manual Compaction"} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658734323, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 128573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19414, "largest_seqno": 19820, "table_properties": {"data_size": 126330, "index_size": 354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6044, "raw_average_key_size": 19, "raw_value_size": 121771, "raw_average_value_size": 394, "num_data_blocks": 16, "num_entries": 309, "num_filter_entries": 309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658646, "oldest_key_time": 1759658646, "file_creation_time": 1759658658, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 3213 microseconds, and 1122 cpu microseconds. Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.734370) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 128573 bytes OK Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.734390) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.736090) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.736114) EVENT_LOG_v1 {"time_micros": 1759658658736108, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.736128) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 195865, prev total WAL file size 195865, number of live 
WAL files 2. Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.736836) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(125KB)], [27(17MB)] Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658736953, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18612130, "oldest_snapshot_seqno": -1} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12349 keys, 16071105 bytes, temperature: kUnknown Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658833576, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 16071105, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16000926, "index_size": 38206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30917, "raw_key_size": 334026, "raw_average_key_size": 27, "raw_value_size": 15790855, "raw_average_value_size": 1278, 
"num_data_blocks": 1433, "num_entries": 12349, "num_filter_entries": 12349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658658, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.833974) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 16071105 bytes Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.835675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.4 rd, 166.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 17.6 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(269.8) write-amplify(125.0) OK, records in: 12864, records dropped: 515 output_compression: NoCompression Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.835705) EVENT_LOG_v1 {"time_micros": 1759658658835693, "job": 14, "event": "compaction_finished", "compaction_time_micros": 96721, "compaction_time_cpu_micros": 51589, "output_level": 6, "num_output_files": 1, "total_output_size": 16071105, "num_input_records": 12864, "num_output_records": 12349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658835867, "job": 14, "event": "table_file_deletion", "file_number": 29} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658658839156, "job": 
14, "event": "table_file_deletion", "file_number": 27} Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.736637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.839185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.839190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.839193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.839196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:04:18.839199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:04:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:19 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:19.185 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:18Z, description=, device_id=0eca3632-10b1-44ae-8606-2934e8ebaceb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=28ec8a8c-b686-462a-bfae-04815b199410, ip_allocation=immediate, mac_address=fa:16:3e:77:13:cc, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:08Z, description=, dns_domain=, id=a83b73df-1192-4d7e-96ed-794e48f8f131, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1483918899-network, port_security_enabled=True, project_id=729ee41aeaa14c54bd5d5db84035013e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7516, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=972, status=ACTIVE, subnets=['e5d92ee5-214f-4ae7-aaa0-a4e204c46d87'], tags=[], tenant_id=729ee41aeaa14c54bd5d5db84035013e, updated_at=2025-10-05T10:04:09Z, vlan_transparent=None, network_id=a83b73df-1192-4d7e-96ed-794e48f8f131, port_security_enabled=False, project_id=729ee41aeaa14c54bd5d5db84035013e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1022, status=DOWN, tags=[], tenant_id=729ee41aeaa14c54bd5d5db84035013e, updated_at=2025-10-05T10:04:18Z on network a83b73df-1192-4d7e-96ed-794e48f8f131#033[00m Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:04:19 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:19.274 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:18Z, description=, device_id=dd848696-1192-42dc-bdf5-98abbd49c528, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=447763c2-e3ed-4d66-8374-184253dfa825, ip_allocation=immediate, mac_address=fa:16:3e:42:54:aa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:08Z, description=, dns_domain=, id=3313983b-2e96-4a06-b17a-d6b215fac86b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-502275482-network, port_security_enabled=True, project_id=4003abd2530843a7a9d79f2692bf3d99, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47192, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=971, status=ACTIVE, subnets=['e10ca5b5-5795-4f3c-adb1-ecff31f292d1'], tags=[], tenant_id=4003abd2530843a7a9d79f2692bf3d99, updated_at=2025-10-05T10:04:09Z, vlan_transparent=None, network_id=3313983b-2e96-4a06-b17a-d6b215fac86b, port_security_enabled=False, project_id=4003abd2530843a7a9d79f2692bf3d99, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1023, status=DOWN, tags=[], tenant_id=4003abd2530843a7a9d79f2692bf3d99, updated_at=2025-10-05T10:04:18Z on network 3313983b-2e96-4a06-b17a-d6b215fac86b#033[00m Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.301 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.302 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m 
Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.302 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.303 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.303 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:04:19 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:19.430 2 INFO neutron.agent.securitygroups_rpc [None req-b873fce1-f809-4b34-a034-ad5a5a62539e 39f5838e84b5470ca86bd1fe4d24e208 3f4120f15a704a6bbf6e983fddbe14b0 - - default default] Security group member updated ['0f68a87a-4623-4f46-8cab-ade6cefe7174']#033[00m Oct 5 06:04:19 localhost podman[326241]: 2025-10-05 10:04:19.488968368 +0000 UTC m=+0.094244517 container kill 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:04:19 localhost dnsmasq[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/addn_hosts - 1 addresses Oct 5 06:04:19 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/host Oct 5 06:04:19 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/opts Oct 5 06:04:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:04:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:04:19 localhost podman[326274]: 2025-10-05 10:04:19.537416862 +0000 UTC m=+0.071544807 container kill 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:04:19 localhost dnsmasq[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/addn_hosts - 1 addresses Oct 5 06:04:19 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/host Oct 5 06:04:19 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/opts Oct 5 06:04:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:19 localhost podman[326291]: 2025-10-05 
10:04:19.661031441 +0000 UTC m=+0.134832946 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 06:04:19 localhost podman[326290]: 2025-10-05 10:04:19.620931025 +0000 UTC m=+0.101296150 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:04:19 localhost podman[326290]: 2025-10-05 10:04:19.704216972 +0000 UTC m=+0.184582047 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:19 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:04:19 localhost podman[326291]: 2025-10-05 10:04:19.71951971 +0000 UTC m=+0.193321215 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 5 06:04:19 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:04:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4189796569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:04:19 localhost nova_compute[297130]: 2025-10-05 10:04:19.782 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:04:19 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:19.839 271653 INFO neutron.agent.dhcp.agent [None req-825708e6-794e-4f8b-afbc-fd3102480fdd - - - - - -] DHCP configuration for ports {'28ec8a8c-b686-462a-bfae-04815b199410'} is completed#033[00m Oct 5 06:04:20 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:20.019 271653 INFO neutron.agent.dhcp.agent [None req-d9bdea18-10e4-4dd2-9432-d3431691849f - - - - - -] DHCP configuration for ports {'447763c2-e3ed-4d66-8374-184253dfa825'} is completed#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.023 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.024 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11621MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.025 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.025 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.117 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.118 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.140 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:04:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:20.402 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:04:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:20.403 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:04:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:20.403 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:04:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:04:20 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2864622246' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.689 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.698 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.724 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.777 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:04:20 localhost nova_compute[297130]: 2025-10-05 10:04:20.778 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:04:20 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:20.835 2 INFO neutron.agent.securitygroups_rpc [None req-fc2f7fbf-79da-4814-8711-a9ff97a2ad28 39f5838e84b5470ca86bd1fe4d24e208 3f4120f15a704a6bbf6e983fddbe14b0 - - default default] Security group member updated ['0f68a87a-4623-4f46-8cab-ade6cefe7174']#033[00m Oct 5 06:04:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:21 localhost nova_compute[297130]: 2025-10-05 10:04:21.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:22 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:22.470 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:18Z, description=, device_id=0eca3632-10b1-44ae-8606-2934e8ebaceb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=28ec8a8c-b686-462a-bfae-04815b199410, ip_allocation=immediate, mac_address=fa:16:3e:77:13:cc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:08Z, description=, dns_domain=, id=a83b73df-1192-4d7e-96ed-794e48f8f131, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1483918899-network, port_security_enabled=True, project_id=729ee41aeaa14c54bd5d5db84035013e, 
provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7516, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=972, status=ACTIVE, subnets=['e5d92ee5-214f-4ae7-aaa0-a4e204c46d87'], tags=[], tenant_id=729ee41aeaa14c54bd5d5db84035013e, updated_at=2025-10-05T10:04:09Z, vlan_transparent=None, network_id=a83b73df-1192-4d7e-96ed-794e48f8f131, port_security_enabled=False, project_id=729ee41aeaa14c54bd5d5db84035013e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1022, status=DOWN, tags=[], tenant_id=729ee41aeaa14c54bd5d5db84035013e, updated_at=2025-10-05T10:04:18Z on network a83b73df-1192-4d7e-96ed-794e48f8f131#033[00m Oct 5 06:04:22 localhost podman[326390]: 2025-10-05 10:04:22.714156948 +0000 UTC m=+0.062771196 container kill 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:04:22 localhost dnsmasq[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/addn_hosts - 1 addresses Oct 5 06:04:22 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/host Oct 5 06:04:22 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/opts Oct 5 06:04:22 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:22.731 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:18Z, description=, device_id=dd848696-1192-42dc-bdf5-98abbd49c528, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=447763c2-e3ed-4d66-8374-184253dfa825, ip_allocation=immediate, mac_address=fa:16:3e:42:54:aa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:04:08Z, description=, dns_domain=, id=3313983b-2e96-4a06-b17a-d6b215fac86b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-502275482-network, port_security_enabled=True, project_id=4003abd2530843a7a9d79f2692bf3d99, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47192, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=971, status=ACTIVE, subnets=['e10ca5b5-5795-4f3c-adb1-ecff31f292d1'], tags=[], tenant_id=4003abd2530843a7a9d79f2692bf3d99, updated_at=2025-10-05T10:04:09Z, vlan_transparent=None, network_id=3313983b-2e96-4a06-b17a-d6b215fac86b, port_security_enabled=False, project_id=4003abd2530843a7a9d79f2692bf3d99, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1023, status=DOWN, tags=[], tenant_id=4003abd2530843a7a9d79f2692bf3d99, updated_at=2025-10-05T10:04:18Z on network 3313983b-2e96-4a06-b17a-d6b215fac86b#033[00m Oct 5 06:04:22 localhost nova_compute[297130]: 2025-10-05 10:04:22.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:22 localhost systemd[1]: tmp-crun.0vctil.mount: Deactivated successfully. 
Oct 5 06:04:22 localhost dnsmasq[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/addn_hosts - 1 addresses Oct 5 06:04:22 localhost podman[326429]: 2025-10-05 10:04:22.982654108 +0000 UTC m=+0.070782626 container kill 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:04:22 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/host Oct 5 06:04:22 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/opts Oct 5 06:04:22 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:22.987 271653 INFO neutron.agent.dhcp.agent [None req-f941496d-a071-4417-9eeb-98a203189027 - - - - - -] DHCP configuration for ports {'28ec8a8c-b686-462a-bfae-04815b199410'} is completed#033[00m Oct 5 06:04:23 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:23.241 271653 INFO neutron.agent.dhcp.agent [None req-c7da9f3e-da69-4529-a690-880073b1b7bf - - - - - -] DHCP configuration for ports {'447763c2-e3ed-4d66-8374-184253dfa825'} is completed#033[00m Oct 5 06:04:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:24.218 2 INFO 
neutron.agent.securitygroups_rpc [req-803059a0-e8ae-4dd9-b37f-165f2f3b99c6 req-91605528-ec08-4e30-8480-afd66ad061b2 47b0a607769d444e821972981f90739d eff80b93002a40fda33dc9fbdac9814e - - default default] Security group rule updated ['e6485e38-61f1-4967-b607-1efac3e82095']#033[00m Oct 5 06:04:24 localhost nova_compute[297130]: 2025-10-05 10:04:24.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:04:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:04:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:04:25 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:04:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:04:25 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev ca5b4aa7-94b4-4597-8a08-c8fafb183744 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:04:25 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev ca5b4aa7-94b4-4597-8a08-c8fafb183744 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:04:25 localhost ceph-mgr[301363]: [progress INFO root] Completed event ca5b4aa7-94b4-4597-8a08-c8fafb183744 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:04:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", 
"states": ["destroyed"], "format": "json"} v 0) Oct 5 06:04:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:04:25 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:25.367 2 INFO neutron.agent.securitygroups_rpc [None req-3f129a0c-3040-422c-9549-909e704a7a54 07c064cb999141c9a1e10d6cd219806f 318dd9dd1a494c039b49e420f4b0eccb - - default default] Security group member updated ['58cfae10-a4b4-45dd-9a1d-adbcaacaf651']#033[00m Oct 5 06:04:25 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:25.444 2 INFO neutron.agent.securitygroups_rpc [req-12bf250b-5e4a-4d0a-bf15-cb3b3b550047 req-29dda7fe-696c-4e16-a348-1f3552936e79 47b0a607769d444e821972981f90739d eff80b93002a40fda33dc9fbdac9814e - - default default] Security group rule updated ['e6485e38-61f1-4967-b607-1efac3e82095']#033[00m Oct 5 06:04:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:25 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:04:25 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:04:26 localhost podman[248157]: time="2025-10-05T10:04:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:04:26 localhost podman[248157]: @ - - [05/Oct/2025:10:04:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149964 "" "Go-http-client/1.1" Oct 5 06:04:26 localhost podman[248157]: @ - - [05/Oct/2025:10:04:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20281 "" "Go-http-client/1.1" Oct 5 06:04:26 
localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:04:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:04:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:04:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:27 localhost nova_compute[297130]: 2025-10-05 10:04:27.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:28 localhost dnsmasq[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/addn_hosts - 0 addresses Oct 5 06:04:28 localhost podman[326554]: 2025-10-05 10:04:28.065726074 +0000 UTC m=+0.064723621 container kill 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:04:28 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/host Oct 5 06:04:28 localhost dnsmasq-dhcp[326055]: read /var/lib/neutron/dhcp/3313983b-2e96-4a06-b17a-d6b215fac86b/opts Oct 5 06:04:28 localhost nova_compute[297130]: 2025-10-05 10:04:28.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:28 localhost ovn_controller[157556]: 
2025-10-05T10:04:28Z|00100|binding|INFO|Releasing lport 513833a6-34aa-4fb9-b56a-132fdb1b291b from this chassis (sb_readonly=0) Oct 5 06:04:28 localhost kernel: device tap513833a6-34 left promiscuous mode Oct 5 06:04:28 localhost ovn_controller[157556]: 2025-10-05T10:04:28Z|00101|binding|INFO|Setting lport 513833a6-34aa-4fb9-b56a-132fdb1b291b down in Southbound Oct 5 06:04:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:28.246 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-3313983b-2e96-4a06-b17a-d6b215fac86b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3313983b-2e96-4a06-b17a-d6b215fac86b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4003abd2530843a7a9d79f2692bf3d99', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b8d3209-c2ee-4c78-bb04-61e78cad9fba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=513833a6-34aa-4fb9-b56a-132fdb1b291b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:28.248 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 
513833a6-34aa-4fb9-b56a-132fdb1b291b in datapath 3313983b-2e96-4a06-b17a-d6b215fac86b unbound from our chassis#033[00m Oct 5 06:04:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:28.254 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3313983b-2e96-4a06-b17a-d6b215fac86b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:04:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:28.255 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc9d48b-aa65-4bd4-bd3c-f96ac525f50e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:28 localhost nova_compute[297130]: 2025-10-05 10:04:28.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:28 localhost nova_compute[297130]: 2025-10-05 10:04:28.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:28 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:28.639 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:28Z, description=, device_id=7c3037fe-f4f3-47b1-b512-7a2a3832d0be, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c5177eec-8f0b-4e0c-9b07-6fd9af67014e, ip_allocation=immediate, mac_address=fa:16:3e:4c:c1:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, 
is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1088, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:28Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:28 localhost podman[326594]: 2025-10-05 10:04:28.834832058 +0000 UTC m=+0.047783318 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:04:28 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 5 addresses Oct 5 06:04:28 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:28 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:28 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:28.971 2 INFO neutron.agent.securitygroups_rpc [None req-7923a666-f93e-4612-b396-936fcbeb38e1 07c064cb999141c9a1e10d6cd219806f 318dd9dd1a494c039b49e420f4b0eccb - - default default] Security group member updated ['58cfae10-a4b4-45dd-9a1d-adbcaacaf651']#033[00m Oct 5 06:04:29 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:29.122 271653 INFO neutron.agent.dhcp.agent [None req-56c28d0b-3d0c-4556-8956-56ec299ece55 - - - - - -] DHCP configuration for ports {'c5177eec-8f0b-4e0c-9b07-6fd9af67014e'} is completed#033[00m Oct 5 06:04:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:29 localhost nova_compute[297130]: 2025-10-05 10:04:29.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:30 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:04:30 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:30 localhost podman[326633]: 2025-10-05 10:04:30.240082319 +0000 UTC m=+0.064221785 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:30 localhost dnsmasq-dhcp[325876]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:30 localhost nova_compute[297130]: 2025-10-05 10:04:30.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:30 localhost podman[326669]: 2025-10-05 10:04:30.683864521 +0000 UTC m=+0.061827991 container kill 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:04:30 localhost dnsmasq[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/addn_hosts - 0 addresses Oct 5 06:04:30 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/host Oct 5 06:04:30 localhost dnsmasq-dhcp[326060]: read /var/lib/neutron/dhcp/a83b73df-1192-4d7e-96ed-794e48f8f131/opts Oct 5 06:04:30 localhost nova_compute[297130]: 2025-10-05 10:04:30.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:30 localhost ovn_controller[157556]: 2025-10-05T10:04:30Z|00102|binding|INFO|Releasing lport 90ac5484-a7ab-4f3a-ae39-413d08499a4c from this chassis (sb_readonly=0) Oct 5 06:04:30 localhost ovn_controller[157556]: 2025-10-05T10:04:30Z|00103|binding|INFO|Setting lport 90ac5484-a7ab-4f3a-ae39-413d08499a4c down in Southbound Oct 5 06:04:30 localhost kernel: device tap90ac5484-a7 left promiscuous mode Oct 5 06:04:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:30.899 163201 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-a83b73df-1192-4d7e-96ed-794e48f8f131', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b73df-1192-4d7e-96ed-794e48f8f131', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '729ee41aeaa14c54bd5d5db84035013e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e61edcbe-2cf7-4227-9c15-1ada0dedc03c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=90ac5484-a7ab-4f3a-ae39-413d08499a4c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:04:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:30.901 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 90ac5484-a7ab-4f3a-ae39-413d08499a4c in datapath a83b73df-1192-4d7e-96ed-794e48f8f131 unbound from our chassis#033[00m Oct 5 06:04:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:30.904 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a83b73df-1192-4d7e-96ed-794e48f8f131, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 
5 06:04:30 localhost nova_compute[297130]: 2025-10-05 10:04:30.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:30 localhost ovn_metadata_agent[163196]: 2025-10-05 10:04:30.905 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[005d6ab0-5c0c-4280-913f-7025d4716fef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:04:31 localhost podman[326707]: 2025-10-05 10:04:31.238096031 +0000 UTC m=+0.059362044 container kill 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:04:31 localhost dnsmasq[326055]: exiting on receipt of SIGTERM Oct 5 06:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:04:31 localhost systemd[1]: libpod-4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6.scope: Deactivated successfully. 
Oct 5 06:04:31 localhost podman[326722]: 2025-10-05 10:04:31.322882028 +0000 UTC m=+0.064421232 container died 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:04:31 localhost podman[326734]: 2025-10-05 10:04:31.384525823 +0000 UTC m=+0.111976262 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:04:31 localhost podman[326734]: 2025-10-05 10:04:31.394363402 +0000 UTC m=+0.121813851 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:04:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6-userdata-shm.mount: Deactivated successfully. Oct 5 06:04:31 localhost systemd[1]: var-lib-containers-storage-overlay-9a1c3494ee37c54aeb570624f9330640946a15f828238cc6652798c2b9bffa03-merged.mount: Deactivated successfully. Oct 5 06:04:31 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:04:31 localhost podman[326722]: 2025-10-05 10:04:31.429266626 +0000 UTC m=+0.170805790 container remove 4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3313983b-2e96-4a06-b17a-d6b215fac86b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:31 localhost systemd[1]: libpod-conmon-4690b9eb62f48eb618aa0d77ceee7efae99868c37b6d23eface2c0d2ecb73cc6.scope: Deactivated successfully. Oct 5 06:04:31 localhost podman[326732]: 2025-10-05 10:04:31.523852962 +0000 UTC m=+0.249209793 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:31 localhost podman[326732]: 2025-10-05 10:04:31.559663331 +0000 UTC m=+0.285020142 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 06:04:31 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:04:31 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:31 localhost podman[326799]: 2025-10-05 10:04:31.564021929 +0000 UTC m=+0.061117591 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:04:31 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:31 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:04:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:31 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:31.674 271653 INFO neutron.agent.dhcp.agent [None req-71590f04-2894-46d0-b18c-2290aaffca11 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:31 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:31.721 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:32 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:04:32 localhost podman[326842]: 2025-10-05 10:04:32.007097 +0000 UTC m=+0.059961959 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:04:32 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:32 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:32 localhost nova_compute[297130]: 2025-10-05 10:04:32.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:32 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:32.310 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: 
{}#033[00m Oct 5 06:04:32 localhost systemd[1]: run-netns-qdhcp\x2d3313983b\x2d2e96\x2d4a06\x2db17a\x2dd6b215fac86b.mount: Deactivated successfully. Oct 5 06:04:32 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:32.800 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:32Z, description=, device_id=548e426c-c0c3-4496-9657-b0ea01880f11, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cd35dde8-de16-44a9-b9d8-5989b6a5f6c8, ip_allocation=immediate, mac_address=fa:16:3e:51:82:03, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1095, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:32Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:32 localhost nova_compute[297130]: 2025-10-05 10:04:32.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:33 localhost podman[326878]: 2025-10-05 10:04:33.046534463 +0000 UTC m=+0.046269905 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:04:33 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:04:33 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:33 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:04:33 localhost systemd[1]: tmp-crun.QKP63f.mount: Deactivated successfully. Oct 5 06:04:33 localhost podman[326892]: 2025-10-05 10:04:33.185091401 +0000 UTC m=+0.107334186 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal 
rhel9, container_name=openstack_network_exporter) Oct 5 06:04:33 localhost podman[326892]: 2025-10-05 10:04:33.197462929 +0000 UTC m=+0.119705724 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc.) Oct 5 06:04:33 localhost snmpd[68888]: empty variable list in _query Oct 5 06:04:33 localhost snmpd[68888]: empty variable list in _query Oct 5 06:04:33 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:04:33 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:33.288 271653 INFO neutron.agent.dhcp.agent [None req-832b0e4b-349f-4144-b01a-b7555bf08bbf - - - - - -] DHCP configuration for ports {'cd35dde8-de16-44a9-b9d8-5989b6a5f6c8'} is completed#033[00m Oct 5 06:04:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:33 localhost dnsmasq[326060]: exiting on receipt of SIGTERM Oct 5 06:04:33 localhost podman[326936]: 2025-10-05 10:04:33.816614203 +0000 UTC m=+0.059477476 container kill 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:04:33 localhost systemd[1]: libpod-13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71.scope: Deactivated successfully. 
Oct 5 06:04:33 localhost podman[326948]: 2025-10-05 10:04:33.889997199 +0000 UTC m=+0.059626270 container died 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71-userdata-shm.mount: Deactivated successfully. Oct 5 06:04:33 localhost podman[326948]: 2025-10-05 10:04:33.921065128 +0000 UTC m=+0.090694119 container cleanup 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 06:04:33 localhost systemd[1]: libpod-conmon-13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71.scope: Deactivated successfully. 
Oct 5 06:04:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:33 localhost podman[326950]: 2025-10-05 10:04:33.964079015 +0000 UTC m=+0.124967498 container remove 13b82dd21a19bae57ff4319492d7fc8784e6cedca563ff82e32d108b7c9f0c71 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83b73df-1192-4d7e-96ed-794e48f8f131, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:04:34 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:34.419 271653 INFO neutron.agent.dhcp.agent [None req-89fc5c6e-f1d2-4edb-be6a-2386f4522076 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:34 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:34.420 271653 INFO neutron.agent.dhcp.agent [None req-89fc5c6e-f1d2-4edb-be6a-2386f4522076 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:34 localhost nova_compute[297130]: 2025-10-05 10:04:34.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:34 localhost systemd[1]: var-lib-containers-storage-overlay-2531c07c5ac7da804f5b470861092946021c9b58242f9fcf654aeaec7be9b35b-merged.mount: Deactivated successfully. Oct 5 06:04:34 localhost systemd[1]: run-netns-qdhcp\x2da83b73df\x2d1192\x2d4d7e\x2d96ed\x2d794e48f8f131.mount: Deactivated successfully. 
Oct 5 06:04:34 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:34.947 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:04:34 localhost nova_compute[297130]: 2025-10-05 10:04:34.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:35 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:04:35 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:35 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:35 localhost podman[326992]: 2025-10-05 10:04:35.00105838 +0000 UTC m=+0.061405380 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:04:35 localhost nova_compute[297130]: 2025-10-05 10:04:35.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:37.164 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:36Z, description=, device_id=dc863aae-1152-4d59-9497-3daad7269b52, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=78ff22d4-f4da-4e2f-b89f-75440eab3298, ip_allocation=immediate, mac_address=fa:16:3e:10:fd:80, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1124, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:36Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:37 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:04:37 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:37 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:37 localhost podman[327028]: 2025-10-05 10:04:37.380304385 +0000 UTC m=+0.057119952 container kill 
8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:37.659 271653 INFO neutron.agent.dhcp.agent [None req-155c92b4-9e8d-4f5b-bfc9-7c96f27c8f8a - - - - - -] DHCP configuration for ports {'78ff22d4-f4da-4e2f-b89f-75440eab3298'} is completed#033[00m Oct 5 06:04:37 localhost nova_compute[297130]: 2025-10-05 10:04:37.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.883 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 
06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:04:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.886 12 DEBUG ceilometer.polling.manager [-] 
Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:04:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:04:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:39 localhost nova_compute[297130]: 2025-10-05 10:04:39.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:39 localhost nova_compute[297130]: 2025-10-05 10:04:39.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:04:39 localhost podman[327048]: 2025-10-05 10:04:39.907146937 +0000 UTC m=+0.075268429 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:39 localhost podman[327048]: 2025-10-05 10:04:39.941159317 +0000 UTC m=+0.109280819 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:04:39 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:04:40 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:04:40 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:40 localhost podman[327084]: 2025-10-05 10:04:40.404528023 +0000 UTC m=+0.061284566 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:40 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:40 
localhost nova_compute[297130]: 2025-10-05 10:04:40.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:04:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:04:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:04:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:04:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:04:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:04:41 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:04:41 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:41 localhost podman[327122]: 2025-10-05 10:04:41.956674171 +0000 UTC m=+0.062224532 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:04:41 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:41 localhost systemd[1]: 
tmp-crun.TTSY9X.mount: Deactivated successfully. Oct 5 06:04:42 localhost nova_compute[297130]: 2025-10-05 10:04:42.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:42 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:42.604 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:42Z, description=, device_id=df4306e6-1ae2-4955-96e2-4974ee1b183d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ef19eb72-1a7f-4e6c-bbca-e895261b510f, ip_allocation=immediate, mac_address=fa:16:3e:7e:22:21, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1155, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:42Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:42 localhost podman[327157]: 
2025-10-05 10:04:42.805415511 +0000 UTC m=+0.048774995 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:04:42 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:04:42 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:42 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:42 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:42.942 271653 INFO neutron.agent.dhcp.agent [None req-04948aa3-b7e6-4b58-bb18-b74c1ecb3128 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:42Z, description=, device_id=d3e09502-0d8d-4738-8edf-a4cb5c390ffd, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=58f4fca9-16f3-4455-b425-62b6d9528822, ip_allocation=immediate, mac_address=fa:16:3e:d9:2f:f6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, 
project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1154, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:42Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:43 localhost nova_compute[297130]: 2025-10-05 10:04:43.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:43.027 271653 INFO neutron.agent.dhcp.agent [None req-99d8fbba-bd01-45b2-b5fb-221762c9be9b - - - - - -] DHCP configuration for ports {'ef19eb72-1a7f-4e6c-bbca-e895261b510f'} is completed#033[00m Oct 5 06:04:43 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:04:43 localhost podman[327194]: 2025-10-05 10:04:43.184446372 +0000 UTC m=+0.059870408 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS) Oct 5 06:04:43 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:43 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:43 localhost systemd[1]: tmp-crun.RgK4tW.mount: Deactivated successfully. Oct 5 06:04:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:43.379 271653 INFO neutron.agent.dhcp.agent [None req-55e05127-2e63-4922-bfd6-f6bfbfba075f - - - - - -] DHCP configuration for ports {'58f4fca9-16f3-4455-b425-62b6d9528822'} is completed#033[00m Oct 5 06:04:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:44.424 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:43Z, description=, device_id=56378fdb-860f-427b-9037-81142d8a1f5f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0cdead38-5a85-441d-8fa0-ca3021b5c06e, ip_allocation=immediate, mac_address=fa:16:3e:61:6d:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, 
provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1171, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:43Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:44 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:04:44 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:44 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:44 localhost podman[327232]: 2025-10-05 10:04:44.615393778 +0000 UTC m=+0.058953123 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:04:44 localhost nova_compute[297130]: 2025-10-05 10:04:44.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:44.866 271653 
INFO neutron.agent.dhcp.agent [None req-8f9016c4-e9e8-467b-b70f-ecefadc54f04 - - - - - -] DHCP configuration for ports {'0cdead38-5a85-441d-8fa0-ca3021b5c06e'} is completed#033[00m Oct 5 06:04:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:45.722 2 INFO neutron.agent.securitygroups_rpc [None req-29641d0e-a015-45b4-a935-7a2349b946b8 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:04:45 localhost nova_compute[297130]: 2025-10-05 10:04:45.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:45 localhost podman[327256]: 2025-10-05 10:04:45.908267178 +0000 UTC m=+0.071779342 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', 
'--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:04:45 localhost podman[327255]: 2025-10-05 10:04:45.917956533 +0000 UTC m=+0.080450900 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:04:45 localhost podman[327256]: 2025-10-05 10:04:45.923754362 +0000 UTC m=+0.087266486 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:04:45 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:04:45 localhost podman[327255]: 2025-10-05 10:04:45.979050684 +0000 UTC m=+0.141545051 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:04:45 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:04:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:46.252 2 INFO neutron.agent.securitygroups_rpc [None req-8f518baf-0a73-42d4-8334-fb31cc8ac4e7 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:46 localhost openstack_network_exporter[250246]: ERROR 10:04:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:04:46 localhost openstack_network_exporter[250246]: ERROR 10:04:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:04:46 localhost openstack_network_exporter[250246]: ERROR 10:04:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:04:46 localhost openstack_network_exporter[250246]: ERROR 10:04:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:04:46 localhost openstack_network_exporter[250246]: Oct 5 06:04:46 localhost openstack_network_exporter[250246]: ERROR 10:04:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:04:46 localhost openstack_network_exporter[250246]: Oct 5 06:04:47 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:47.137 2 INFO neutron.agent.securitygroups_rpc [None req-287a8dc9-b560-471a-93c2-acb6de35de57 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB 
used, 41 GiB / 42 GiB avail Oct 5 06:04:48 localhost nova_compute[297130]: 2025-10-05 10:04:48.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:49 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:49.085 2 INFO neutron.agent.securitygroups_rpc [None req-097980e1-eded-469f-8d31-7bb2108118fd b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:49 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:04:49 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:49 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:49 localhost podman[327310]: 2025-10-05 10:04:49.468507957 +0000 UTC m=+0.051819627 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:04:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:49.597 271653 INFO neutron.agent.dhcp.agent [None req-45c665bd-6652-46b7-80dc-2ca3fc4970de - - - - - -] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:48Z, description=, device_id=5cbeff01-190b-40ac-a0bb-d01d09f96b2b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0869e89f-2fc5-42fc-9f68-c31101a8dc2c, ip_allocation=immediate, mac_address=fa:16:3e:a4:49:fc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1204, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:48Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e108 e108: 6 total, 6 up, 6 in Oct 5 06:04:49 localhost nova_compute[297130]: 2025-10-05 10:04:49.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:49 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:04:49 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:49 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:49 localhost podman[327348]: 2025-10-05 10:04:49.805944901 +0000 UTC m=+0.067723242 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:04:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:04:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:04:49 localhost systemd[1]: tmp-crun.Cxc1ze.mount: Deactivated successfully. 
Oct 5 06:04:49 localhost podman[327364]: 2025-10-05 10:04:49.916984936 +0000 UTC m=+0.084988844 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:04:49 localhost podman[327364]: 2025-10-05 10:04:49.960305751 +0000 UTC m=+0.128309689 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:04:49 localhost podman[327363]: 2025-10-05 10:04:49.967342023 +0000 UTC m=+0.138518487 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:04:49 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 06:04:50 localhost podman[327363]: 2025-10-05 10:04:50.00416568 +0000 UTC m=+0.175342114 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0) Oct 5 06:04:50 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:04:50 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:50.051 271653 INFO neutron.agent.dhcp.agent [None req-dc1bffb8-d6a5-44d6-ab36-9f0c2bbacadd - - - - - -] DHCP configuration for ports {'0869e89f-2fc5-42fc-9f68-c31101a8dc2c'} is completed#033[00m Oct 5 06:04:50 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e109 e109: 6 total, 6 up, 6 in Oct 5 06:04:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:04:51 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:51.808 2 INFO neutron.agent.securitygroups_rpc [None req-48ed4e42-a053-4e1f-8ebc-0e6db992bfed b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:53 localhost nova_compute[297130]: 2025-10-05 10:04:53.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:53 localhost nova_compute[297130]: 2025-10-05 10:04:53.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Oct 5 06:04:53 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:53.930 2 INFO neutron.agent.securitygroups_rpc [None req-3621eae7-9b8a-4d62-ad7a-61d9e831b33d b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:54 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:04:54 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:54 localhost podman[327432]: 2025-10-05 10:04:54.146157921 +0000 UTC m=+0.058979773 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:04:54 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:54.282 271653 INFO neutron.agent.dhcp.agent [None req-929a07d8-3a5e-433d-b316-65fff52fb867 - - - - - -] Trigger reload_allocations 
for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:52Z, description=, device_id=cbb4de02-077a-43de-ab06-ddb069ed7522, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fe7568b1-1817-49f8-aa37-e14efefa056c, ip_allocation=immediate, mac_address=fa:16:3e:2a:2d:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1222, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:53Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:54 localhost systemd[1]: tmp-crun.vTpLbq.mount: Deactivated successfully. 
Oct 5 06:04:54 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:04:54 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:54 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:54 localhost podman[327469]: 2025-10-05 10:04:54.489924348 +0000 UTC m=+0.064115613 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:04:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:54.711 271653 INFO neutron.agent.dhcp.agent [None req-455dc992-590a-40e8-bc29-00857b16b4fe - - - - - -] DHCP configuration for ports {'fe7568b1-1817-49f8-aa37-e14efefa056c'} is completed#033[00m Oct 5 06:04:54 localhost nova_compute[297130]: 2025-10-05 10:04:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:54.948 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:04:54Z, description=, device_id=35e9991f-3534-49b3-b850-f8e1ed668869, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], 
fixed_ips=[], id=b87053f9-82ef-47d3-a013-51c3ba9ee7f6, ip_allocation=immediate, mac_address=fa:16:3e:31:b6:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1227, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:04:54Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:04:55 localhost systemd[1]: tmp-crun.MrfAR4.mount: Deactivated successfully. 
Oct 5 06:04:55 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 5 addresses Oct 5 06:04:55 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:04:55 localhost podman[327507]: 2025-10-05 10:04:55.193432528 +0000 UTC m=+0.061922513 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:04:55 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:04:55 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:04:55.456 271653 INFO neutron.agent.dhcp.agent [None req-daf0e54a-13c9-4e20-baf6-56835d463015 - - - - - -] DHCP configuration for ports {'b87053f9-82ef-47d3-a013-51c3ba9ee7f6'} is completed#033[00m Oct 5 06:04:55 localhost nova_compute[297130]: 2025-10-05 10:04:55.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Oct 5 06:04:56 localhost podman[248157]: time="2025-10-05T10:04:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:04:56 localhost podman[248157]: @ - - [05/Oct/2025:10:04:56 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:04:56 localhost podman[248157]: @ - - [05/Oct/2025:10:04:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19330 "" "Go-http-client/1.1" Oct 5 06:04:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e110 e110: 6 total, 6 up, 6 in Oct 5 06:04:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 5.7 KiB/s wr, 62 op/s Oct 5 06:04:57 localhost nova_compute[297130]: 2025-10-05 10:04:57.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:57 localhost neutron_sriov_agent[264647]: 2025-10-05 10:04:57.800 2 INFO neutron.agent.securitygroups_rpc [None req-e6701a41-37af-4a2b-ae16-76078df6bfd1 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:57 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e111 e111: 6 total, 6 up, 6 in Oct 5 06:04:58 localhost nova_compute[297130]: 2025-10-05 10:04:58.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:58 localhost nova_compute[297130]: 2025-10-05 10:04:58.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e112 e112: 6 total, 6 up, 6 in Oct 5 06:04:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:04:59 localhost 
neutron_sriov_agent[264647]: 2025-10-05 10:04:59.410 2 INFO neutron.agent.securitygroups_rpc [None req-311e8aa1-b786-4d5b-a1f1-c8361acde964 b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:04:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail; 9.5 KiB/s rd, 2.3 KiB/s wr, 14 op/s Oct 5 06:04:59 localhost nova_compute[297130]: 2025-10-05 10:04:59.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:04:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e113 e113: 6 total, 6 up, 6 in Oct 5 06:05:00 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:05:00 localhost podman[327545]: 2025-10-05 10:05:00.249399293 +0000 UTC m=+0.063507198 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:00 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:00 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:05:00 localhost ceph-osd[31524]: rocksdb: 
[db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 7705 writes, 32K keys, 7705 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 7705 writes, 1846 syncs, 4.17 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2632 writes, 9807 keys, 2632 commit groups, 1.0 writes per commit group, ingest: 9.93 MB, 0.02 MB/s#012Interval WAL: 2632 writes, 1105 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:05:00 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e114 e114: 6 total, 6 up, 6 in Oct 5 06:05:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 754 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:01 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:01.786 2 INFO neutron.agent.securitygroups_rpc [None req-3404f74c-a45d-4b5e-8fd8-426d7f6d729d b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:05:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:05:01 localhost systemd[1]: tmp-crun.aMzDeK.mount: Deactivated successfully. 
Oct 5 06:05:01 localhost podman[327566]: 2025-10-05 10:05:01.935941694 +0000 UTC m=+0.103605573 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:05:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e115 e115: 6 total, 6 up, 6 in Oct 5 06:05:01 localhost 
podman[327566]: 2025-10-05 10:05:01.976178854 +0000 UTC m=+0.143842733 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:05:01 localhost systemd[1]: tmp-crun.5vn1b6.mount: Deactivated successfully. 
Oct 5 06:05:01 localhost podman[327567]: 2025-10-05 10:05:01.984176703 +0000 UTC m=+0.147612837 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:05:01 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:05:02 localhost podman[327567]: 2025-10-05 10:05:02.018222983 +0000 UTC m=+0.181659077 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:05:02 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:05:02 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e116 e116: 6 total, 6 up, 6 in Oct 5 06:05:03 localhost nova_compute[297130]: 2025-10-05 10:05:03.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 133 KiB/s rd, 22 KiB/s wr, 191 op/s Oct 5 06:05:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:05:03 localhost podman[327605]: 2025-10-05 10:05:03.91603104 +0000 UTC m=+0.082066265 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git) Oct 5 06:05:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:03 localhost podman[327605]: 2025-10-05 10:05:03.954607954 +0000 UTC m=+0.120643139 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 06:05:03 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:05:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:03.992 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:03Z, description=, device_id=63bc9f96-5ddd-44d0-8837-65338b4bb960, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=23279492-b7ff-4213-baa0-6e97c6febfcb, ip_allocation=immediate, mac_address=fa:16:3e:ad:cd:a9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1250, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:03Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:04 localhost nova_compute[297130]: 2025-10-05 10:05:04.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:04 localhost dnsmasq[325876]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 5 addresses Oct 5 06:05:04 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:04 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:04 localhost podman[327643]: 2025-10-05 10:05:04.284507382 +0000 UTC m=+0.058606263 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:04.471 271653 INFO neutron.agent.dhcp.agent [None req-ed185e76-034a-4dd4-9b63-572d0c75b8e8 - - - - - -] DHCP configuration for ports {'23279492-b7ff-4213-baa0-6e97c6febfcb'} is completed#033[00m Oct 5 06:05:04 localhost nova_compute[297130]: 2025-10-05 10:05:04.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:04 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:05:04 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:04 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:04 localhost podman[327680]: 2025-10-05 10:05:04.714977669 +0000 UTC m=+0.067645020 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:04 localhost nova_compute[297130]: 2025-10-05 10:05:04.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e117 e117: 6 total, 6 up, 6 in Oct 5 06:05:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:05:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 9175 writes, 37K keys, 9175 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 9175 writes, 2196 syncs, 4.18 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3413 writes, 12K keys, 3413 commit groups, 1.0 writes per commit group, ingest: 14.62 MB, 0.02 MB/s#012Interval WAL: 3413 writes, 1423 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:05:05 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:05.518 2 INFO neutron.agent.securitygroups_rpc [None req-96ba4175-465c-43e2-a318-1d55fb6b05eb b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v210: 177 
pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 114 KiB/s rd, 19 KiB/s wr, 163 op/s Oct 5 06:05:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e118 e118: 6 total, 6 up, 6 in Oct 5 06:05:06 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:05:06 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:06 localhost podman[327717]: 2025-10-05 10:05:06.890069145 +0000 UTC m=+0.058906411 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:06 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:06 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:06.899 2 INFO neutron.agent.securitygroups_rpc [None req-c6bd9fde-6da0-4c32-a4a7-f28125de346f b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:07 localhost nova_compute[297130]: 2025-10-05 10:05:07.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e119 e119: 6 total, 6 up, 6 in Oct 5 06:05:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB 
data, 755 MiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 8.4 KiB/s wr, 136 op/s Oct 5 06:05:08 localhost nova_compute[297130]: 2025-10-05 10:05:08.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:08 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:08.238 2 INFO neutron.agent.securitygroups_rpc [None req-8902c5d7-9a45-4249-96d1-8b434e4645ef b03fcdb187d0440aa0b9048a2de09675 93aad94041fb432287bb3adb92af45a9 - - default default] Security group member updated ['fe68491c-5ef6-4bdc-aa9d-7d02dc4369c1']#033[00m Oct 5 06:05:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e120 e120: 6 total, 6 up, 6 in Oct 5 06:05:08 localhost systemd[1]: tmp-crun.D1BKdM.mount: Deactivated successfully. Oct 5 06:05:08 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:05:08 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:08 localhost podman[327754]: 2025-10-05 10:05:08.944372679 +0000 UTC m=+0.078954530 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:05:08 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Oct 5 06:05:08 localhost nova_compute[297130]: 2025-10-05 10:05:08.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:08 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:08.986 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:08 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:08.988 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:05:09 localhost nova_compute[297130]: 2025-10-05 10:05:09.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:09 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:05:09 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:09 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:09 localhost podman[327793]: 2025-10-05 10:05:09.336310513 +0000 UTC m=+0.072180205 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:05:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 8.5 KiB/s wr, 137 op/s Oct 5 06:05:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e121 e121: 6 total, 6 up, 6 in Oct 5 06:05:09 localhost nova_compute[297130]: 2025-10-05 10:05:09.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:10 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:10.283 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:10Z, description=, device_id=9addf042-8c35-4f90-817c-a74a20711f96, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=26ce0a28-b5e4-4b06-93d9-cd9774b8c914, ip_allocation=immediate, mac_address=fa:16:3e:e6:58:f4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, 
provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1264, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:10Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:10 localhost systemd[1]: tmp-crun.wyrffu.mount: Deactivated successfully. Oct 5 06:05:10 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:05:10 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:10 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:10 localhost podman[327830]: 2025-10-05 10:05:10.524909272 +0000 UTC m=+0.076091401 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:05:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:05:10 localhost podman[327846]: 2025-10-05 10:05:10.645622831 +0000 UTC m=+0.081678283 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:05:10 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e122 e122: 6 
total, 6 up, 6 in Oct 5 06:05:10 localhost podman[327846]: 2025-10-05 10:05:10.678302535 +0000 UTC m=+0.114358007 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:05:10 localhost systemd[1]: 
2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:05:10 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:10.789 271653 INFO neutron.agent.dhcp.agent [None req-421e7b97-4e99-4e22-9ca4-f8e1374ee1cf - - - - - -] DHCP configuration for ports {'26ce0a28-b5e4-4b06-93d9-cd9774b8c914'} is completed#033[00m Oct 5 06:05:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:05:11 Oct 5 06:05:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:05:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:05:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'manila_data', '.mgr', 'manila_metadata', 'backups'] Oct 5 06:05:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:05:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e123 e123: 6 total, 6 up, 6 in Oct 5 06:05:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:05:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:05:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:05:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:05:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004303472658431044 of space, bias 1.0, pg target 0.8592600408000651 quantized to 32 (current 32) Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 
5 06:05:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:05:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:05:12 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:05:12 localhost podman[327884]: 2025-10-05 10:05:12.405726963 +0000 UTC m=+0.058406298 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:05:12 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:12 localhost dnsmasq-dhcp[325876]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:12 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:12.990 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:13.192 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:12Z, description=, device_id=53b18ce8-1508-4be8-8576-a3b57abab7f4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=72d543dd-97d5-450a-8bc3-a729a8f19e17, ip_allocation=immediate, mac_address=fa:16:3e:c8:6b:01, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], 
tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1281, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:12Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:13 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:13.236 271653 INFO neutron.agent.linux.ip_lib [None req-170cb75b-9776-4a94-8a2b-69e8267b7145 - - - - - -] Device tap6bb15cab-c2 cannot be used as it has no MAC address#033[00m Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost kernel: device tap6bb15cab-c2 entered promiscuous mode Oct 5 06:05:13 localhost NetworkManager[5970]: [1759658713.2746] manager: (tap6bb15cab-c2): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost ovn_controller[157556]: 2025-10-05T10:05:13Z|00104|binding|INFO|Claiming lport 6bb15cab-c202-43a8-9910-1f6cd203e267 for this chassis. Oct 5 06:05:13 localhost ovn_controller[157556]: 2025-10-05T10:05:13Z|00105|binding|INFO|6bb15cab-c202-43a8-9910-1f6cd203e267: Claiming unknown Oct 5 06:05:13 localhost systemd-udevd[327918]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:05:13 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:13.287 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-d049dc2f-9ad1-442f-a7ed-40e48a021c99', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d049dc2f-9ad1-442f-a7ed-40e48a021c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed44175d-bb2f-4794-9594-5b77ddc6ee0f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6bb15cab-c202-43a8-9910-1f6cd203e267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:13 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:13.290 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 6bb15cab-c202-43a8-9910-1f6cd203e267 in datapath d049dc2f-9ad1-442f-a7ed-40e48a021c99 bound to our chassis#033[00m Oct 5 06:05:13 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:13.296 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d049dc2f-9ad1-442f-a7ed-40e48a021c99 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:13 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:13.297 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[1c475423-b108-4815-a2a0-d114884afff6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:13 localhost ovn_controller[157556]: 2025-10-05T10:05:13Z|00106|binding|INFO|Setting lport 6bb15cab-c202-43a8-9910-1f6cd203e267 ovn-installed in OVS Oct 5 06:05:13 localhost ovn_controller[157556]: 2025-10-05T10:05:13Z|00107|binding|INFO|Setting lport 6bb15cab-c202-43a8-9910-1f6cd203e267 up in Southbound Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost nova_compute[297130]: 2025-10-05 10:05:13.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:13 localhost systemd[1]: tmp-crun.kkl44z.mount: Deactivated successfully. 
Oct 5 06:05:13 localhost podman[327939]: 2025-10-05 10:05:13.428420228 +0000 UTC m=+0.074397914 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:05:13 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:05:13 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:13 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 132 KiB/s rd, 10 KiB/s wr, 181 op/s Oct 5 06:05:13 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:13.757 271653 INFO neutron.agent.dhcp.agent [None req-d3479c61-8ff4-4539-b7ad-56a61417fc46 - - - - - -] DHCP configuration for ports {'72d543dd-97d5-450a-8bc3-a729a8f19e17'} is completed#033[00m Oct 5 06:05:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:14 localhost podman[328007]: Oct 5 06:05:14 localhost podman[328007]: 2025-10-05 10:05:14.230022049 +0000 UTC m=+0.090899065 container create 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:05:14 localhost systemd[1]: Started libpod-conmon-89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af.scope. Oct 5 06:05:14 localhost podman[328007]: 2025-10-05 10:05:14.185019759 +0000 UTC m=+0.045896815 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:14 localhost systemd[1]: Started libcrun container. Oct 5 06:05:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e3db0c66094c6dd0cb4492dfab69bdee9c9f17d019adc089483e19714ea963/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:14 localhost podman[328007]: 2025-10-05 10:05:14.304941428 +0000 UTC m=+0.165818444 container init 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:05:14 localhost podman[328007]: 2025-10-05 10:05:14.313974214 +0000 UTC m=+0.174851220 container start 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:05:14 localhost dnsmasq[328025]: started, version 2.85 cachesize 150 Oct 5 06:05:14 localhost dnsmasq[328025]: DNS service limited to local subnets Oct 5 06:05:14 localhost dnsmasq[328025]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:14 localhost dnsmasq[328025]: warning: no upstream servers configured Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Oct 5 06:05:14 localhost dnsmasq[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/addn_hosts - 0 addresses Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/host Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/opts Oct 5 06:05:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:14.378 271653 INFO neutron.agent.dhcp.agent [None req-170cb75b-9776-4a94-8a2b-69e8267b7145 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:13Z, description=, device_id=94485d07-9727-4c93-a258-4dd243f7b5fb, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1b78f9ab-eb15-42f5-a9a0-f2f9de6afa37, ip_allocation=immediate, mac_address=fa:16:3e:c6:94:71, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:11Z, description=, dns_domain=, id=d049dc2f-9ad1-442f-a7ed-40e48a021c99, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-468555151, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1270, status=ACTIVE, subnets=['97de6d44-3b72-479b-8f4d-593b2ead7e5a'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:12Z, vlan_transparent=None, network_id=d049dc2f-9ad1-442f-a7ed-40e48a021c99, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1282, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:13Z on network d049dc2f-9ad1-442f-a7ed-40e48a021c99#033[00m Oct 5 06:05:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:14.531 271653 INFO neutron.agent.dhcp.agent [None req-e3f4ebce-305b-4046-96d3-d5408a7b5655 - - - - - -] DHCP configuration for ports {'37341402-3045-4d4f-ab54-49aafc5fd520'} is completed#033[00m Oct 5 06:05:14 localhost dnsmasq[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/addn_hosts - 1 addresses Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/host Oct 5 06:05:14 localhost podman[328045]: 2025-10-05 10:05:14.574994 +0000 UTC m=+0.060238138 container kill 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, 
tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/opts Oct 5 06:05:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:14.768 271653 INFO neutron.agent.dhcp.agent [None req-9c1895ad-5e96-4f77-964a-51bbfc25ecc4 - - - - - -] DHCP configuration for ports {'1b78f9ab-eb15-42f5-a9a0-f2f9de6afa37'} is completed#033[00m Oct 5 06:05:14 localhost nova_compute[297130]: 2025-10-05 10:05:14.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:14 localhost nova_compute[297130]: 2025-10-05 10:05:14.778 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:14 localhost nova_compute[297130]: 2025-10-05 10:05:14.779 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:14.792 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:13Z, description=, device_id=94485d07-9727-4c93-a258-4dd243f7b5fb, 
device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1b78f9ab-eb15-42f5-a9a0-f2f9de6afa37, ip_allocation=immediate, mac_address=fa:16:3e:c6:94:71, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:11Z, description=, dns_domain=, id=d049dc2f-9ad1-442f-a7ed-40e48a021c99, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-468555151, port_security_enabled=True, project_id=318dd9dd1a494c039b49e420f4b0eccb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1270, status=ACTIVE, subnets=['97de6d44-3b72-479b-8f4d-593b2ead7e5a'], tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:12Z, vlan_transparent=None, network_id=d049dc2f-9ad1-442f-a7ed-40e48a021c99, port_security_enabled=False, project_id=318dd9dd1a494c039b49e420f4b0eccb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1282, status=DOWN, tags=[], tenant_id=318dd9dd1a494c039b49e420f4b0eccb, updated_at=2025-10-05T10:05:13Z on network d049dc2f-9ad1-442f-a7ed-40e48a021c99#033[00m Oct 5 06:05:14 localhost nova_compute[297130]: 2025-10-05 10:05:14.797 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:14 localhost nova_compute[297130]: 2025-10-05 10:05:14.797 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:14 localhost nova_compute[297130]: 2025-10-05 10:05:14.798 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:14 localhost systemd[1]: tmp-crun.18rfdc.mount: Deactivated successfully. Oct 5 06:05:14 localhost dnsmasq[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/addn_hosts - 1 addresses Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/host Oct 5 06:05:14 localhost podman[328082]: 2025-10-05 10:05:14.984919215 +0000 UTC m=+0.061076061 container kill 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:05:14 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/opts Oct 5 06:05:15 localhost nova_compute[297130]: 2025-10-05 10:05:15.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:15.223 271653 INFO neutron.agent.dhcp.agent [None req-d5aea651-20f6-4a95-8d9c-90227bcc6d4b - - - - - -] DHCP configuration for ports {'1b78f9ab-eb15-42f5-a9a0-f2f9de6afa37'} is completed#033[00m Oct 5 06:05:15 localhost 
nova_compute[297130]: 2025-10-05 10:05:15.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:15 localhost nova_compute[297130]: 2025-10-05 10:05:15.275 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:05:15 localhost nova_compute[297130]: 2025-10-05 10:05:15.275 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:05:15 localhost nova_compute[297130]: 2025-10-05 10:05:15.345 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:05:15 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:05:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:15 localhost podman[328121]: 2025-10-05 10:05:15.548847729 +0000 UTC m=+0.058668054 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:05:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 8.3 KiB/s wr, 151 op/s Oct 5 06:05:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 e124: 6 total, 6 up, 6 in Oct 5 06:05:16 localhost openstack_network_exporter[250246]: ERROR 10:05:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:16 localhost openstack_network_exporter[250246]: ERROR 10:05:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:05:16 localhost openstack_network_exporter[250246]: ERROR 10:05:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:05:16 localhost openstack_network_exporter[250246]: 
Oct 5 06:05:16 localhost openstack_network_exporter[250246]: ERROR 10:05:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:05:16 localhost openstack_network_exporter[250246]: Oct 5 06:05:16 localhost openstack_network_exporter[250246]: ERROR 10:05:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:05:16 localhost systemd[1]: tmp-crun.B0vFqA.mount: Deactivated successfully. Oct 5 06:05:16 localhost podman[328141]: 2025-10-05 10:05:16.932374519 +0000 UTC m=+0.093636301 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:05:16 localhost podman[328141]: 2025-10-05 10:05:16.968705582 +0000 UTC m=+0.129967384 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:05:16 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:05:17 localhost podman[328142]: 2025-10-05 10:05:16.972598498 +0000 UTC m=+0.130151989 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:05:17 localhost podman[328142]: 2025-10-05 10:05:17.054327253 +0000 UTC m=+0.211880734 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:05:17 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:05:17 localhost nova_compute[297130]: 2025-10-05 10:05:17.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 95 KiB/s rd, 7.2 KiB/s wr, 130 op/s Oct 5 06:05:17 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:17.887 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:17Z, description=, device_id=e959734f-af7c-47ce-a76d-87ea38f4190b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0f0280c4-8964-495f-b8f2-3ebefbb08d09, ip_allocation=immediate, mac_address=fa:16:3e:ce:ac:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1313, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:17Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:18 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:05:18 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:18 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:18 localhost podman[328200]: 2025-10-05 10:05:18.09504814 +0000 UTC m=+0.059473636 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:18 localhost nova_compute[297130]: 2025-10-05 10:05:18.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:18 localhost nova_compute[297130]: 2025-10-05 10:05:18.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:18 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Oct 5 06:05:18 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:18.422 271653 INFO neutron.agent.dhcp.agent [None req-de50c9cd-21bc-4e93-9794-3ca0425da3cb - - - - - -] DHCP configuration for ports {'0f0280c4-8964-495f-b8f2-3ebefbb08d09'} is completed#033[00m Oct 5 06:05:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.294 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.294 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.295 2 DEBUG nova.compute.resource_tracker [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.295 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:05:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 6.2 KiB/s wr, 113 op/s Oct 5 06:05:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:05:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2173223242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.709 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.943 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.946 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11600MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.946 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.947 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:05:19 localhost nova_compute[297130]: 2025-10-05 10:05:19.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.030 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.030 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.057 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:05:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:20.403 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:05:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:20.404 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:05:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:20.405 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:05:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:05:20 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3697723825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.516 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.521 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.542 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.543 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:05:20 localhost nova_compute[297130]: 2025-10-05 10:05:20.543 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:05:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:05:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:05:20 localhost podman[328268]: 2025-10-05 10:05:20.915068845 +0000 UTC m=+0.071869126 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, 
io.buildah.version=1.41.3) Oct 5 06:05:20 localhost systemd[1]: tmp-crun.A6DWC6.mount: Deactivated successfully. Oct 5 06:05:20 localhost podman[328267]: 2025-10-05 10:05:20.98405207 +0000 UTC m=+0.142322641 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:05:20 localhost podman[328267]: 2025-10-05 10:05:20.996152982 +0000 UTC m=+0.154423543 container exec_died 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:05:21 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:05:21 localhost podman[328268]: 2025-10-05 10:05:21.052570264 +0000 UTC m=+0.209370565 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:21 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:05:21 localhost nova_compute[297130]: 2025-10-05 10:05:21.544 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:05:21 localhost nova_compute[297130]: 2025-10-05 10:05:21.544 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:05:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 5.0 KiB/s wr, 91 op/s Oct 5 06:05:22 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:22.670 2 INFO neutron.agent.securitygroups_rpc [None req-93fe7840-dc69-45ff-b50f-36a4dfb68a10 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:23 localhost nova_compute[297130]: 2025-10-05 10:05:23.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:23 localhost nova_compute[297130]: 2025-10-05 10:05:23.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:23 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:23.346 2 INFO neutron.agent.securitygroups_rpc [None req-1d706833-c05a-42ba-83e6-877058a56cc7 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 
active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:24 localhost nova_compute[297130]: 2025-10-05 10:05:24.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:26 localhost podman[248157]: time="2025-10-05T10:05:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:05:26 localhost podman[248157]: @ - - [05/Oct/2025:10:05:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148135 "" "Go-http-client/1.1" Oct 5 06:05:26 localhost podman[248157]: @ - - [05/Oct/2025:10:05:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19796 "" "Go-http-client/1.1" Oct 5 06:05:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:05:26 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:05:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:05:26 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:05:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:05:26 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 97157335-d206-4dd9-a9e8-0ccdd8721d5d (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:05:26 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 97157335-d206-4dd9-a9e8-0ccdd8721d5d (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:05:26 localhost ceph-mgr[301363]: [progress INFO root] Completed event 97157335-d206-4dd9-a9e8-0ccdd8721d5d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:05:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:05:26 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:05:26 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:05:26 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:05:26 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:05:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:05:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:05:27 localhost nova_compute[297130]: 2025-10-05 10:05:27.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:28 
localhost nova_compute[297130]: 2025-10-05 10:05:28.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:28 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:28.664 2 INFO neutron.agent.securitygroups_rpc [None req-d703c398-6b6b-4943-948d-cb3bb0e9aa0d 1923ea4457da447faeaeab6caeaa2432 0e9cb8e52fb8423a938253c02c2bf4e9 - - default default] Security group member updated ['0ca2311d-6d3d-404c-89d9-0a8f70a83790']#033[00m Oct 5 06:05:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:29 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:29.423 2 INFO neutron.agent.securitygroups_rpc [None req-8eb6dbc3-4698-4d96-8d9f-a1a17a2550fa 1923ea4457da447faeaeab6caeaa2432 0e9cb8e52fb8423a938253c02c2bf4e9 - - default default] Security group member updated ['0ca2311d-6d3d-404c-89d9-0a8f70a83790']#033[00m Oct 5 06:05:29 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:29.446 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:29 localhost nova_compute[297130]: 2025-10-05 10:05:29.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:31 localhost nova_compute[297130]: 2025-10-05 10:05:31.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:32 localhost 
neutron_sriov_agent[264647]: 2025-10-05 10:05:32.234 2 INFO neutron.agent.securitygroups_rpc [None req-340b3254-226d-430f-95be-e66d88dfe216 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:05:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:05:32 localhost podman[328400]: 2025-10-05 10:05:32.936272083 +0000 UTC m=+0.098984656 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute) Oct 5 06:05:32 localhost podman[328401]: 2025-10-05 10:05:32.992723446 +0000 UTC m=+0.155109851 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:05:33 localhost podman[328400]: 2025-10-05 10:05:33.004486567 +0000 UTC m=+0.167199130 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0) Oct 5 06:05:33 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:05:33 localhost podman[328401]: 2025-10-05 10:05:33.0323884 +0000 UTC m=+0.194774795 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:05:33 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:05:33 localhost nova_compute[297130]: 2025-10-05 10:05:33.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:33 localhost dnsmasq[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/addn_hosts - 0 addresses Oct 5 06:05:33 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/host Oct 5 06:05:33 localhost podman[328462]: 2025-10-05 10:05:33.626281734 +0000 UTC m=+0.067919057 container kill 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:05:33 localhost dnsmasq-dhcp[328025]: read /var/lib/neutron/dhcp/d049dc2f-9ad1-442f-a7ed-40e48a021c99/opts Oct 5 06:05:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:33 localhost nova_compute[297130]: 2025-10-05 10:05:33.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:33 localhost ovn_controller[157556]: 2025-10-05T10:05:33Z|00108|binding|INFO|Releasing lport 6bb15cab-c202-43a8-9910-1f6cd203e267 from this chassis (sb_readonly=0) Oct 5 06:05:33 localhost ovn_controller[157556]: 2025-10-05T10:05:33Z|00109|binding|INFO|Setting lport 6bb15cab-c202-43a8-9910-1f6cd203e267 down in Southbound Oct 5 06:05:33 localhost kernel: device tap6bb15cab-c2 left 
promiscuous mode Oct 5 06:05:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:33.794 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-d049dc2f-9ad1-442f-a7ed-40e48a021c99', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d049dc2f-9ad1-442f-a7ed-40e48a021c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '318dd9dd1a494c039b49e420f4b0eccb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed44175d-bb2f-4794-9594-5b77ddc6ee0f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6bb15cab-c202-43a8-9910-1f6cd203e267) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:33.796 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 6bb15cab-c202-43a8-9910-1f6cd203e267 in datapath d049dc2f-9ad1-442f-a7ed-40e48a021c99 unbound from our chassis#033[00m Oct 5 06:05:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:33.802 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d049dc2f-9ad1-442f-a7ed-40e48a021c99 or it has no MAC or IP addresses 
configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:33.803 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[b83899f4-d806-4db7-b847-3f1b377fd9a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:33 localhost nova_compute[297130]: 2025-10-05 10:05:33.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:33 localhost sshd[328485]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:05:34 localhost nova_compute[297130]: 2025-10-05 10:05:34.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:05:34 localhost podman[328501]: 2025-10-05 10:05:34.927969555 +0000 UTC m=+0.085106628 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:05:34 localhost podman[328501]: 2025-10-05 10:05:34.945230057 +0000 UTC m=+0.102367180 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 06:05:34 localhost dnsmasq[328025]: exiting on receipt of SIGTERM Oct 5 06:05:34 localhost podman[328514]: 2025-10-05 10:05:34.966411436 +0000 UTC m=+0.066941711 container kill 
89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:05:34 localhost systemd[1]: libpod-89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af.scope: Deactivated successfully. Oct 5 06:05:35 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:05:35 localhost podman[328534]: 2025-10-05 10:05:35.019825026 +0000 UTC m=+0.043191052 container died 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Oct 5 06:05:35 localhost podman[328534]: 2025-10-05 10:05:35.048830259 +0000 UTC m=+0.072196265 container cleanup 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:05:35 localhost systemd[1]: libpod-conmon-89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af.scope: Deactivated successfully. Oct 5 06:05:35 localhost podman[328541]: 2025-10-05 10:05:35.122990286 +0000 UTC m=+0.133015667 container remove 89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d049dc2f-9ad1-442f-a7ed-40e48a021c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.481 271653 INFO neutron.agent.dhcp.agent [None req-96d3201a-f45e-4a1e-bc66-523ae5e44595 - - - - - -] Synchronizing state#033[00m Oct 5 06:05:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.686 271653 INFO neutron.agent.dhcp.agent [None req-8f203918-17f9-4c69-ae0e-b611ace55eae - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.688 271653 INFO neutron.agent.dhcp.agent [-] Starting network 9e407b24-78e4-48e1-afe1-7ce4a3eaca7c dhcp configuration#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.688 271653 INFO neutron.agent.dhcp.agent [-] Finished network 9e407b24-78e4-48e1-afe1-7ce4a3eaca7c dhcp configuration#033[00m Oct 5 06:05:35 localhost 
neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.689 271653 INFO neutron.agent.dhcp.agent [-] Starting network 94c2e7b0-ab6c-42f9-ab1f-1ad3e44499ac dhcp configuration#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.689 271653 INFO neutron.agent.dhcp.agent [-] Finished network 94c2e7b0-ab6c-42f9-ab1f-1ad3e44499ac dhcp configuration#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.689 271653 INFO neutron.agent.dhcp.agent [-] Starting network d049dc2f-9ad1-442f-a7ed-40e48a021c99 dhcp configuration#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.690 271653 INFO neutron.agent.dhcp.agent [-] Finished network d049dc2f-9ad1-442f-a7ed-40e48a021c99 dhcp configuration#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.691 271653 INFO neutron.agent.dhcp.agent [None req-8f203918-17f9-4c69-ae0e-b611ace55eae - - - - - -] Synchronizing state complete#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.691 271653 INFO neutron.agent.dhcp.agent [None req-7acb171e-0770-4345-9624-116637634f3e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.692 271653 INFO neutron.agent.dhcp.agent [None req-7acb171e-0770-4345-9624-116637634f3e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.692 271653 INFO neutron.agent.dhcp.agent [None req-d9da4565-cc25-4af9-bb01-c86acc6ba360 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.693 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.693 271653 INFO 
neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.785 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:35 localhost systemd[1]: var-lib-containers-storage-overlay-63e3db0c66094c6dd0cb4492dfab69bdee9c9f17d019adc089483e19714ea963-merged.mount: Deactivated successfully. Oct 5 06:05:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89b5bec5176ac29c54fd9bd689c5514efa96a2b1aef12c7074a03a22473460af-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:35 localhost systemd[1]: run-netns-qdhcp\x2dd049dc2f\x2d9ad1\x2d442f\x2da7ed\x2d40e48a021c99.mount: Deactivated successfully. Oct 5 06:05:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:35.975 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:05:36 localhost nova_compute[297130]: 2025-10-05 10:05:36.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.597288) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736597345, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1383, "num_deletes": 258, "total_data_size": 1793280, "memory_usage": 1821864, "flush_reason": "Manual Compaction"} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736607408, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 883208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19825, "largest_seqno": 21203, "table_properties": {"data_size": 878500, "index_size": 2179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12351, "raw_average_key_size": 21, "raw_value_size": 868278, "raw_average_value_size": 1504, "num_data_blocks": 95, "num_entries": 577, "num_filter_entries": 577, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658659, "oldest_key_time": 1759658659, "file_creation_time": 1759658736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 10172 microseconds, and 3713 cpu microseconds. Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.607457) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 883208 bytes OK Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.607486) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.610103) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.610129) EVENT_LOG_v1 {"time_micros": 1759658736610122, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.610149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1786665, prev total WAL file size 
1786989, number of live WAL files 2. Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.611109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303035' seq:72057594037927935, type:22 .. '6D6772737461740034323537' seq:0, type:0; will stop at (end) Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(862KB)], [30(15MB)] Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736611162, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16954313, "oldest_snapshot_seqno": -1} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12428 keys, 15091738 bytes, temperature: kUnknown Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736704169, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 15091738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15024364, "index_size": 35209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 336036, "raw_average_key_size": 27, "raw_value_size": 14816104, 
"raw_average_value_size": 1192, "num_data_blocks": 1306, "num_entries": 12428, "num_filter_entries": 12428, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.704458) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 15091738 bytes Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.706196) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 162.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(36.3) write-amplify(17.1) OK, records in: 12926, records dropped: 498 output_compression: NoCompression Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.706216) EVENT_LOG_v1 {"time_micros": 1759658736706207, "job": 16, "event": "compaction_finished", "compaction_time_micros": 93082, "compaction_time_cpu_micros": 46140, "output_level": 6, "num_output_files": 1, "total_output_size": 15091738, "num_input_records": 12926, "num_output_records": 12428, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736706430, "job": 16, "event": "table_file_deletion", "file_number": 32} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658736708216, "job": 
16, "event": "table_file_deletion", "file_number": 30} Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.611010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:36 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:05:36.708249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:05:37 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:37.217 2 INFO neutron.agent.securitygroups_rpc [None req-8727ea00-c2b3-4681-bd7e-288be6db83c4 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:05:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:37 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:37.766 2 INFO neutron.agent.securitygroups_rpc [None req-66916715-b26f-4f18-b705-715c7650e30e fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:38 localhost 
nova_compute[297130]: 2025-10-05 10:05:38.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:39 localhost nova_compute[297130]: 2025-10-05 10:05:39.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:39 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:39.949 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:38Z, description=, device_id=597e1948-f54c-4797-804a-a29e520d4291, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1c6a6b7d-6f8b-4f0b-8c7a-778b13c1350d, ip_allocation=immediate, mac_address=fa:16:3e:2f:ad:34, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], 
tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1449, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:39Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:40 localhost systemd[1]: tmp-crun.9GGPrY.mount: Deactivated successfully. Oct 5 06:05:40 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:05:40 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:40 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:40 localhost podman[328584]: 2025-10-05 10:05:40.205844484 +0000 UTC m=+0.077238343 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:05:40 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:40.485 271653 INFO neutron.agent.dhcp.agent [None req-ce1c8084-ff2b-4d98-b609-3e14370d57ca - - - - - -] DHCP configuration for ports {'1c6a6b7d-6f8b-4f0b-8c7a-778b13c1350d'} is completed#033[00m Oct 5 06:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:05:40 localhost podman[328607]: 2025-10-05 10:05:40.923500211 +0000 UTC m=+0.091598995 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Oct 5 06:05:40 localhost podman[328607]: 2025-10-05 10:05:40.961340846 +0000 UTC 
m=+0.129439630 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:05:40 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:05:41 localhost systemd[1]: tmp-crun.oEIRiD.mount: Deactivated successfully. Oct 5 06:05:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:05:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:05:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:05:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:05:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:05:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:05:42 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:42.376 271653 INFO neutron.agent.linux.ip_lib [None req-6de1ed4c-83be-46b4-a259-a052b3aab34d - - - - - -] Device tape4963209-30 cannot be used as it has no MAC address#033[00m Oct 5 06:05:42 localhost nova_compute[297130]: 2025-10-05 10:05:42.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:42 localhost kernel: device tape4963209-30 entered promiscuous mode Oct 5 06:05:42 localhost NetworkManager[5970]: [1759658742.4139] manager: (tape4963209-30): new Generic device (/org/freedesktop/NetworkManager/Devices/26) Oct 5 06:05:42 localhost nova_compute[297130]: 2025-10-05 10:05:42.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:42 localhost ovn_controller[157556]: 2025-10-05T10:05:42Z|00110|binding|INFO|Claiming lport e4963209-3033-41c0-9581-f5b3f7cc7657 for this chassis. 
Oct 5 06:05:42 localhost ovn_controller[157556]: 2025-10-05T10:05:42Z|00111|binding|INFO|e4963209-3033-41c0-9581-f5b3f7cc7657: Claiming unknown Oct 5 06:05:42 localhost systemd-udevd[328636]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.426 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-37005d99-d901-4c50-9212-cb1a632d283b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37005d99-d901-4c50-9212-cb1a632d283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93d33813-8b39-4274-a0de-ce2b2eeea300, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e4963209-3033-41c0-9581-f5b3f7cc7657) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.428 163201 INFO neutron.agent.ovn.metadata.agent [-] Port e4963209-3033-41c0-9581-f5b3f7cc7657 in datapath 37005d99-d901-4c50-9212-cb1a632d283b bound to our chassis#033[00m Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 
10:05:42.429 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 37005d99-d901-4c50-9212-cb1a632d283b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.431 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2301e5-e9f1-4dcf-8184-9d679661c4cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:42 localhost ovn_controller[157556]: 2025-10-05T10:05:42Z|00112|binding|INFO|Setting lport e4963209-3033-41c0-9581-f5b3f7cc7657 ovn-installed in OVS Oct 5 06:05:42 localhost ovn_controller[157556]: 2025-10-05T10:05:42Z|00113|binding|INFO|Setting lport e4963209-3033-41c0-9581-f5b3f7cc7657 up in Southbound Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost nova_compute[297130]: 2025-10-05 10:05:42.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost journal[237639]: ethtool ioctl error on tape4963209-30: No such device Oct 5 06:05:42 localhost nova_compute[297130]: 2025-10-05 10:05:42.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:42 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:42.504 2 INFO neutron.agent.securitygroups_rpc [None req-e2152d4c-0acb-4b85-9281-c419e6ddd1b9 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:42 localhost nova_compute[297130]: 2025-10-05 10:05:42.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:42 localhost ovn_controller[157556]: 2025-10-05T10:05:42Z|00114|binding|INFO|Removing iface tape4963209-30 ovn-installed in OVS Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.965 163201 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port eabf964c-ba18-4a3f-8f14-9a1442e37ad9 with type ""#033[00m Oct 5 06:05:42 localhost ovn_controller[157556]: 2025-10-05T10:05:42Z|00115|binding|INFO|Removing lport e4963209-3033-41c0-9581-f5b3f7cc7657 ovn-installed in OVS Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.966 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-37005d99-d901-4c50-9212-cb1a632d283b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37005d99-d901-4c50-9212-cb1a632d283b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93d33813-8b39-4274-a0de-ce2b2eeea300, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e4963209-3033-41c0-9581-f5b3f7cc7657) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.968 163201 INFO neutron.agent.ovn.metadata.agent [-] Port e4963209-3033-41c0-9581-f5b3f7cc7657 in datapath 37005d99-d901-4c50-9212-cb1a632d283b unbound from our chassis#033[00m Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.970 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 37005d99-d901-4c50-9212-cb1a632d283b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:05:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:42.971 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6da381-e724-4a2d-95bd-2915bf0c0fbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:05:42 localhost nova_compute[297130]: 2025-10-05 10:05:42.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost podman[328704]: Oct 5 06:05:43 localhost nova_compute[297130]: 2025-10-05 10:05:43.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost podman[328704]: 2025-10-05 10:05:43.301607916 +0000 UTC m=+0.087308427 container 
create e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Oct 5 06:05:43 localhost systemd[1]: Started libpod-conmon-e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac.scope. Oct 5 06:05:43 localhost podman[328704]: 2025-10-05 10:05:43.260129832 +0000 UTC m=+0.045830393 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:43 localhost systemd[1]: tmp-crun.NRvPkp.mount: Deactivated successfully. Oct 5 06:05:43 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3a2d048c4f0f139e6816c331096284f4949a336a4aac52614c9aaceb920b4de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:05:43 localhost podman[328704]: 2025-10-05 10:05:43.398623189 +0000 UTC m=+0.184323710 container init e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:05:43 localhost podman[328704]: 2025-10-05 10:05:43.412284401 +0000 UTC m=+0.197984932 container start e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:05:43 localhost dnsmasq[328723]: started, version 2.85 cachesize 150 Oct 5 06:05:43 localhost dnsmasq[328723]: DNS service limited to local subnets Oct 5 06:05:43 localhost dnsmasq[328723]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:05:43 localhost dnsmasq[328723]: warning: no upstream servers configured Oct 
5 06:05:43 localhost dnsmasq-dhcp[328723]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:05:43 localhost dnsmasq[328723]: read /var/lib/neutron/dhcp/37005d99-d901-4c50-9212-cb1a632d283b/addn_hosts - 0 addresses Oct 5 06:05:43 localhost dnsmasq-dhcp[328723]: read /var/lib/neutron/dhcp/37005d99-d901-4c50-9212-cb1a632d283b/host Oct 5 06:05:43 localhost dnsmasq-dhcp[328723]: read /var/lib/neutron/dhcp/37005d99-d901-4c50-9212-cb1a632d283b/opts Oct 5 06:05:43 localhost nova_compute[297130]: 2025-10-05 10:05:43.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost kernel: device tape4963209-30 left promiscuous mode Oct 5 06:05:43 localhost nova_compute[297130]: 2025-10-05 10:05:43.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:43.568 271653 INFO neutron.agent.dhcp.agent [None req-caba002d-a958-407f-a851-fabd43fe26b8 - - - - - -] DHCP configuration for ports {'ae155e7a-125e-4349-b16c-9a5a2890eb49'} is completed#033[00m Oct 5 06:05:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:43 localhost sshd[328726]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:05:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:44 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:44.011 2 INFO neutron.agent.securitygroups_rpc [None req-02192b33-0053-4542-85f0-0b2170402ee7 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:05:44 localhost 
dnsmasq[328723]: read /var/lib/neutron/dhcp/37005d99-d901-4c50-9212-cb1a632d283b/addn_hosts - 0 addresses Oct 5 06:05:44 localhost dnsmasq-dhcp[328723]: read /var/lib/neutron/dhcp/37005d99-d901-4c50-9212-cb1a632d283b/host Oct 5 06:05:44 localhost dnsmasq-dhcp[328723]: read /var/lib/neutron/dhcp/37005d99-d901-4c50-9212-cb1a632d283b/opts Oct 5 06:05:44 localhost podman[328745]: 2025-10-05 10:05:44.084499806 +0000 UTC m=+0.047475878 container kill e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 37005d99-d901-4c50-9212-cb1a632d283b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape4963209-30 not found in namespace qdhcp-37005d99-d901-4c50-9212-cb1a632d283b. 
Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 5 06:05:44 localhost 
neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent return fut.result() Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent raise self._exception Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape4963209-30 not found in namespace qdhcp-37005d99-d901-4c50-9212-cb1a632d283b. Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.105 271653 ERROR neutron.agent.dhcp.agent #033[00m Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.110 271653 INFO neutron.agent.dhcp.agent [None req-8f203918-17f9-4c69-ae0e-b611ace55eae - - - - - -] Synchronizing state#033[00m Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.257 271653 INFO neutron.agent.dhcp.agent [None req-a939a953-f502-4899-aa58-421095146a67 - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.258 271653 INFO neutron.agent.dhcp.agent [-] Starting network 37005d99-d901-4c50-9212-cb1a632d283b dhcp configuration#033[00m Oct 5 06:05:44 localhost systemd[1]: tmp-crun.1Wdtzy.mount: Deactivated successfully. Oct 5 06:05:44 localhost dnsmasq[328723]: exiting on receipt of SIGTERM Oct 5 06:05:44 localhost podman[328776]: 2025-10-05 10:05:44.431638225 +0000 UTC m=+0.060476884 container kill e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:44 localhost systemd[1]: tmp-crun.oYqHjN.mount: Deactivated successfully. 
Oct 5 06:05:44 localhost systemd[1]: libpod-e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac.scope: Deactivated successfully. Oct 5 06:05:44 localhost podman[328791]: 2025-10-05 10:05:44.503556681 +0000 UTC m=+0.051114938 container died e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:05:44 localhost podman[328791]: 2025-10-05 10:05:44.610083783 +0000 UTC m=+0.157642020 container remove e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-37005d99-d901-4c50-9212-cb1a632d283b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:05:44 localhost systemd[1]: libpod-conmon-e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac.scope: Deactivated successfully. 
Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.699 271653 INFO neutron.agent.dhcp.agent [None req-c13f4d9c-75d9-4bb0-bd60-1eacac6dad99 - - - - - -] Finished network 37005d99-d901-4c50-9212-cb1a632d283b dhcp configuration#033[00m Oct 5 06:05:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:44.701 271653 INFO neutron.agent.dhcp.agent [None req-a939a953-f502-4899-aa58-421095146a67 - - - - - -] Synchronizing state complete#033[00m Oct 5 06:05:44 localhost nova_compute[297130]: 2025-10-05 10:05:44.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:45 localhost nova_compute[297130]: 2025-10-05 10:05:45.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:45 localhost systemd[1]: tmp-crun.kFSlnR.mount: Deactivated successfully. Oct 5 06:05:45 localhost systemd[1]: var-lib-containers-storage-overlay-c3a2d048c4f0f139e6816c331096284f4949a336a4aac52614c9aaceb920b4de-merged.mount: Deactivated successfully. Oct 5 06:05:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e34347a85fc6e91bce19a754dc08caf54c9b21c1ca71986c0812d2d27c1cafac-userdata-shm.mount: Deactivated successfully. Oct 5 06:05:45 localhost systemd[1]: run-netns-qdhcp\x2d37005d99\x2dd901\x2d4c50\x2d9212\x2dcb1a632d283b.mount: Deactivated successfully. 
Oct 5 06:05:45 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:05:45 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:45 localhost podman[328835]: 2025-10-05 10:05:45.637926718 +0000 UTC m=+0.048865186 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:05:45 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:46 localhost openstack_network_exporter[250246]: ERROR 10:05:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:46 localhost openstack_network_exporter[250246]: ERROR 10:05:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:05:46 localhost openstack_network_exporter[250246]: ERROR 10:05:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:05:46 localhost openstack_network_exporter[250246]: ERROR 10:05:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:05:46 localhost openstack_network_exporter[250246]: Oct 5 06:05:46 localhost openstack_network_exporter[250246]: ERROR 
10:05:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:05:46 localhost openstack_network_exporter[250246]: Oct 5 06:05:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:05:46.971 2 INFO neutron.agent.securitygroups_rpc [None req-f585fb86-eb3a-4f02-92d6-e24b1dd983df fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:05:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:47 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:47.722 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:47Z, description=, device_id=4b699f59-531d-4979-b42d-2e2e41b1cbc2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=126d2b30-feda-4cef-a0af-b420597afba8, ip_allocation=immediate, mac_address=fa:16:3e:3d:9f:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, 
vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1485, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:47Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:05:47 localhost systemd[1]: tmp-crun.0dMYdq.mount: Deactivated successfully. Oct 5 06:05:47 localhost podman[328866]: 2025-10-05 10:05:47.971521687 +0000 UTC m=+0.137322724 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:05:47 localhost podman[328868]: 2025-10-05 10:05:47.943116541 +0000 UTC m=+0.102935225 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:05:48 localhost podman[328866]: 2025-10-05 10:05:48.006160954 +0000 UTC m=+0.171961951 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:48 localhost 
systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:05:48 localhost podman[328868]: 2025-10-05 10:05:48.022698666 +0000 UTC m=+0.182517320 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:05:48 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:05:48 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:05:48 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:48 localhost podman[328896]: 2025-10-05 10:05:48.060682755 +0000 UTC m=+0.160840208 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:05:48 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:48 localhost nova_compute[297130]: 2025-10-05 10:05:48.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:48 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:48.356 271653 INFO neutron.agent.dhcp.agent [None req-370d9e86-f383-4462-bccd-a768509f471d - - - - - -] DHCP configuration for ports {'126d2b30-feda-4cef-a0af-b420597afba8'} is completed#033[00m Oct 5 06:05:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:49 localhost nova_compute[297130]: 2025-10-05 10:05:49.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:50 localhost nova_compute[297130]: 2025-10-05 10:05:50.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:51 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:51.206 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:50Z, description=, device_id=e1802c85-2de9-4c86-9df5-b1f147e1b88f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7d835122-8eca-42d7-9bff-23b7be0fbc24, ip_allocation=immediate, mac_address=fa:16:3e:5d:74:23, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1508, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:51Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:05:51 localhost 
dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:05:51 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:51 localhost podman[328956]: 2025-10-05 10:05:51.435071252 +0000 UTC m=+0.055363004 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:05:51 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:05:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:05:51 localhost systemd[1]: tmp-crun.hZsQ9R.mount: Deactivated successfully. Oct 5 06:05:51 localhost systemd[1]: tmp-crun.aQFMH5.mount: Deactivated successfully. 
Oct 5 06:05:51 localhost podman[328970]: 2025-10-05 10:05:51.544447751 +0000 UTC m=+0.091272095 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid) Oct 5 06:05:51 localhost podman[328970]: 2025-10-05 10:05:51.581236458 +0000 UTC m=+0.128060832 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:05:51 localhost podman[328971]: 2025-10-05 10:05:51.604693279 +0000 UTC m=+0.148027318 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:05:51 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:05:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:51 localhost podman[328971]: 2025-10-05 10:05:51.653378089 +0000 UTC m=+0.196712138 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 5 06:05:51 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:05:51 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:51.741 271653 INFO neutron.agent.dhcp.agent [None req-cd9454ae-b4d6-444d-ad12-9c519417b280 - - - - - -] DHCP configuration for ports {'7d835122-8eca-42d7-9bff-23b7be0fbc24'} is completed#033[00m Oct 5 06:05:52 localhost nova_compute[297130]: 2025-10-05 10:05:52.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:53 localhost nova_compute[297130]: 2025-10-05 10:05:53.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:53 localhost sshd[329020]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:05:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:05:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:05:53 localhost nova_compute[297130]: 2025-10-05 10:05:53.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:53 localhost systemd[1]: tmp-crun.H5w93q.mount: Deactivated successfully. 
Oct 5 06:05:53 localhost podman[329039]: 2025-10-05 10:05:53.971809933 +0000 UTC m=+0.053649117 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:05:53 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:05:53 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:05:53 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:05:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:54.348 271653 INFO neutron.agent.linux.ip_lib [None req-d3064ed4-fb4a-459c-b59f-6a511392bbf3 - - - - - -] Device tap12d465d7-09 cannot be used as it has no MAC address#033[00m Oct 5 06:05:54 localhost nova_compute[297130]: 2025-10-05 10:05:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:54 localhost kernel: device tap12d465d7-09 entered promiscuous mode Oct 5 06:05:54 localhost NetworkManager[5970]: [1759658754.4186] manager: (tap12d465d7-09): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Oct 5 06:05:54 localhost nova_compute[297130]: 2025-10-05 10:05:54.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:54 localhost ovn_controller[157556]: 
2025-10-05T10:05:54Z|00116|binding|INFO|Claiming lport 12d465d7-0976-42ac-8ca7-99de5248b25c for this chassis. Oct 5 06:05:54 localhost ovn_controller[157556]: 2025-10-05T10:05:54Z|00117|binding|INFO|12d465d7-0976-42ac-8ca7-99de5248b25c: Claiming unknown Oct 5 06:05:54 localhost systemd-udevd[329069]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:05:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:54.439 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-df7a2d03-b191-45b1-a183-2f789bc978a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df7a2d03-b191-45b1-a183-2f789bc978a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e38d16b31a8e4ad18dabb5df8c62f1c6', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06cb7391-dec3-4cca-ab02-ac3643e95def, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=12d465d7-0976-42ac-8ca7-99de5248b25c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:05:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:54.440 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 12d465d7-0976-42ac-8ca7-99de5248b25c in datapath 
df7a2d03-b191-45b1-a183-2f789bc978a5 bound to our chassis
Oct 5 06:05:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:54.442 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df7a2d03-b191-45b1-a183-2f789bc978a5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:05:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:54.442 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7f7f69-8055-489e-b4c2-133e2b5c5e8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost ovn_controller[157556]: 2025-10-05T10:05:54Z|00118|binding|INFO|Setting lport 12d465d7-0976-42ac-8ca7-99de5248b25c ovn-installed in OVS
Oct 5 06:05:54 localhost ovn_controller[157556]: 2025-10-05T10:05:54Z|00119|binding|INFO|Setting lport 12d465d7-0976-42ac-8ca7-99de5248b25c up in Southbound
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost nova_compute[297130]: 2025-10-05 10:05:54.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost journal[237639]: ethtool ioctl error on tap12d465d7-09: No such device
Oct 5 06:05:54 localhost nova_compute[297130]: 2025-10-05 10:05:54.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:54 localhost nova_compute[297130]: 2025-10-05 10:05:54.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:54 localhost nova_compute[297130]: 2025-10-05 10:05:54.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses
Oct 5 06:05:55 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:05:55 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:05:55 localhost podman[329134]: 2025-10-05 10:05:55.26969545 +0000 UTC m=+0.065817299 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost podman[329178]:
Oct 5 06:05:55 localhost podman[329178]: 2025-10-05 10:05:55.533251454 +0000 UTC m=+0.094602607 container create 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:05:55 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:55.570 271653 INFO neutron.agent.linux.ip_lib [None req-06237c82-374a-4ddb-a63e-cb8ed48c4ea8 - - - - - -] Device tap4bc8173f-5c cannot be used as it has no MAC address
Oct 5 06:05:55 localhost systemd[1]: Started libpod-conmon-77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639.scope.
Oct 5 06:05:55 localhost podman[329178]: 2025-10-05 10:05:55.48916882 +0000 UTC m=+0.050519993 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:05:55 localhost systemd[1]: Started libcrun container.
Oct 5 06:05:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6b23e3b9f5746ed755ea771bbcc72105048f3c856f9710ae8fd6e8c2b27952/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:05:55 localhost podman[329178]: 2025-10-05 10:05:55.633079433 +0000 UTC m=+0.194430586 container init 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 5 06:05:55 localhost podman[329178]: 2025-10-05 10:05:55.643494528 +0000 UTC m=+0.204845691 container start 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 5 06:05:55 localhost dnsmasq[329204]: started, version 2.85 cachesize 150
Oct 5 06:05:55 localhost dnsmasq[329204]: DNS service limited to local subnets
Oct 5 06:05:55 localhost dnsmasq[329204]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:05:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:05:55 localhost dnsmasq[329204]: warning: no upstream servers configured
Oct 5 06:05:55 localhost dnsmasq-dhcp[329204]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 5 06:05:55 localhost dnsmasq[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/addn_hosts - 0 addresses
Oct 5 06:05:55 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/host
Oct 5 06:05:55 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/opts
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost kernel: device tap4bc8173f-5c entered promiscuous mode
Oct 5 06:05:55 localhost NetworkManager[5970]: [1759658755.7399] manager: (tap4bc8173f-5c): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Oct 5 06:05:55 localhost ovn_controller[157556]: 2025-10-05T10:05:55Z|00120|binding|INFO|Claiming lport 4bc8173f-5c3c-4588-a7e7-b044bb35461a for this chassis.
Oct 5 06:05:55 localhost ovn_controller[157556]: 2025-10-05T10:05:55Z|00121|binding|INFO|4bc8173f-5c3c-4588-a7e7-b044bb35461a: Claiming unknown
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost ovn_controller[157556]: 2025-10-05T10:05:55Z|00122|binding|INFO|Setting lport 4bc8173f-5c3c-4588-a7e7-b044bb35461a ovn-installed in OVS
Oct 5 06:05:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:55.759 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-e7814bf0-4429-4174-bc57-7951dd1eed4b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7814bf0-4429-4174-bc57-7951dd1eed4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ed379ee-07c2-41f0-8c2d-eab070a4163c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4bc8173f-5c3c-4588-a7e7-b044bb35461a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:05:55 localhost ovn_controller[157556]: 2025-10-05T10:05:55Z|00123|binding|INFO|Setting lport 4bc8173f-5c3c-4588-a7e7-b044bb35461a up in Southbound
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:55.763 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 4bc8173f-5c3c-4588-a7e7-b044bb35461a in datapath e7814bf0-4429-4174-bc57-7951dd1eed4b bound to our chassis
Oct 5 06:05:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:55.764 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e7814bf0-4429-4174-bc57-7951dd1eed4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:05:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:55.765 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ef7cc8-4750-4559-8721-1bce926a5ece]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost nova_compute[297130]: 2025-10-05 10:05:55.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:55 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:55.857 271653 INFO neutron.agent.dhcp.agent [None req-014a6517-d5f7-4d0f-b7f5-4f3cf67e6c9e - - - - - -] DHCP configuration for ports {'c5b6e0b1-9358-43cb-917e-d784b6053f07'} is completed
Oct 5 06:05:56 localhost podman[248157]: time="2025-10-05T10:05:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:05:56 localhost podman[248157]: @ - - [05/Oct/2025:10:05:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148140 "" "Go-http-client/1.1"
Oct 5 06:05:56 localhost podman[248157]: @ - - [05/Oct/2025:10:05:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19797 "" "Go-http-client/1.1"
Oct 5 06:05:56 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:56.474 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:56Z, description=, device_id=5b4337ec-c479-4d14-b6e8-51c5ccb8c8de, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=335e3b8b-903f-494b-973c-4402f73faca8, ip_allocation=immediate, mac_address=fa:16:3e:dc:80:fc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1552, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:56Z on network cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:05:56 localhost podman[329276]:
Oct 5 06:05:56 localhost podman[329276]: 2025-10-05 10:05:56.712267717 +0000 UTC m=+0.081997854 container create 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:05:56 localhost systemd[1]: Started libpod-conmon-6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea.scope.
Oct 5 06:05:56 localhost podman[329276]: 2025-10-05 10:05:56.672260123 +0000 UTC m=+0.041990260 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 5 06:05:56 localhost systemd[1]: Started libcrun container.
Oct 5 06:05:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be2eb15796e491ca575d74ee0fcc298187694c100777a375726b09e17a10663/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:05:56 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses
Oct 5 06:05:56 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:05:56 localhost podman[329288]: 2025-10-05 10:05:56.786703696 +0000 UTC m=+0.118108333 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 06:05:56 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:05:56 localhost podman[329276]: 2025-10-05 10:05:56.799350928 +0000 UTC m=+0.169081065 container init 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 5 06:05:56 localhost podman[329276]: 2025-10-05 10:05:56.81192427 +0000 UTC m=+0.181654407 container start 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001)
Oct 5 06:05:56 localhost dnsmasq[329309]: started, version 2.85 cachesize 150
Oct 5 06:05:56 localhost dnsmasq[329309]: DNS service limited to local subnets
Oct 5 06:05:56 localhost dnsmasq[329309]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:05:56 localhost dnsmasq[329309]: warning: no upstream servers configured
Oct 5 06:05:56 localhost dnsmasq-dhcp[329309]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 5 06:05:56 localhost dnsmasq[329309]: read /var/lib/neutron/dhcp/e7814bf0-4429-4174-bc57-7951dd1eed4b/addn_hosts - 0 addresses
Oct 5 06:05:56 localhost dnsmasq-dhcp[329309]: read /var/lib/neutron/dhcp/e7814bf0-4429-4174-bc57-7951dd1eed4b/host
Oct 5 06:05:56 localhost dnsmasq-dhcp[329309]: read /var/lib/neutron/dhcp/e7814bf0-4429-4174-bc57-7951dd1eed4b/opts
Oct 5 06:05:56 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:56.945 271653 INFO neutron.agent.dhcp.agent [None req-70067bfe-b3cd-41d4-a0ff-e64269881f50 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:56Z, description=, device_id=1e78ac0f-6a29-495d-bf00-520e59fa6a38, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1a1ebe4e-5b2b-4338-a7a8-3454916fa16d, ip_allocation=immediate, mac_address=fa:16:3e:f8:e6:23, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1553, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:05:56Z on network cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:05:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:57.165 271653 INFO neutron.agent.dhcp.agent [None req-b80300fe-26d6-4de0-8264-6b641b1e02e9 - - - - - -] DHCP configuration for ports {'335e3b8b-903f-494b-973c-4402f73faca8', '73c364f3-c3b0-4e1b-95df-a817bc8e84c4'} is completed
Oct 5 06:05:57 localhost dnsmasq[329309]: exiting on receipt of SIGTERM
Oct 5 06:05:57 localhost podman[329334]: 2025-10-05 10:05:57.180358328 +0000 UTC m=+0.067775238 container kill 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 5 06:05:57 localhost systemd[1]: libpod-6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea.scope: Deactivated successfully.
Oct 5 06:05:57 localhost podman[329360]: 2025-10-05 10:05:57.260923613 +0000 UTC m=+0.064525380 container died 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 5 06:05:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea-userdata-shm.mount: Deactivated successfully.
Oct 5 06:05:57 localhost systemd[1]: var-lib-containers-storage-overlay-4be2eb15796e491ca575d74ee0fcc298187694c100777a375726b09e17a10663-merged.mount: Deactivated successfully.
Oct 5 06:05:57 localhost podman[329360]: 2025-10-05 10:05:57.295982834 +0000 UTC m=+0.099584551 container cleanup 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:05:57 localhost systemd[1]: libpod-conmon-6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea.scope: Deactivated successfully.
Oct 5 06:05:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:57.313 271653 INFO neutron.agent.linux.ip_lib [None req-67b66a6b-cc6d-4531-870f-e33bb45b12c3 - - - - - -] Device tape477820b-a1 cannot be used as it has no MAC address
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost kernel: device tape477820b-a1 entered promiscuous mode
Oct 5 06:05:57 localhost NetworkManager[5970]: [1759658757.3368] manager: (tape477820b-a1): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Oct 5 06:05:57 localhost ovn_controller[157556]: 2025-10-05T10:05:57Z|00124|binding|INFO|Claiming lport e477820b-a10a-448c-8977-3bd6b09123f4 for this chassis.
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost ovn_controller[157556]: 2025-10-05T10:05:57Z|00125|binding|INFO|e477820b-a10a-448c-8977-3bd6b09123f4: Claiming unknown
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.350 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-0ad0d8a8-9181-4949-b23f-21403c4b400d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ad0d8a8-9181-4949-b23f-21403c4b400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ca8a457-d697-45dd-876e-0f87cd06a20e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e477820b-a10a-448c-8977-3bd6b09123f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.351 163201 INFO neutron.agent.ovn.metadata.agent [-] Port e477820b-a10a-448c-8977-3bd6b09123f4 in datapath 0ad0d8a8-9181-4949-b23f-21403c4b400d bound to our chassis
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.353 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6cfd25fa-3b76-4017-8109-8509ea9fbff6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.353 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ad0d8a8-9181-4949-b23f-21403c4b400d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.354 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[f0bfa2de-96f3-4da4-b55c-10a3e2466255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:05:57 localhost ovn_controller[157556]: 2025-10-05T10:05:57Z|00126|binding|INFO|Removing iface tap4bc8173f-5c ovn-installed in OVS
Oct 5 06:05:57 localhost ovn_controller[157556]: 2025-10-05T10:05:57Z|00127|binding|INFO|Removing lport 4bc8173f-5c3c-4588-a7e7-b044bb35461a ovn-installed in OVS
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.359 163201 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 11d8f35d-a5c5-46b5-80be-9a9857724719 with type ""
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.360 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-e7814bf0-4429-4174-bc57-7951dd1eed4b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7814bf0-4429-4174-bc57-7951dd1eed4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b19cb2ed6df34a0dad27155d804f6680', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ed379ee-07c2-41f0-8c2d-eab070a4163c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4bc8173f-5c3c-4588-a7e7-b044bb35461a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.361 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 4bc8173f-5c3c-4588-a7e7-b044bb35461a in datapath e7814bf0-4429-4174-bc57-7951dd1eed4b unbound from our chassis
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.362 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e7814bf0-4429-4174-bc57-7951dd1eed4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:05:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:05:57.362 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcc294a-a5a8-487e-a5ee-124125ee7eb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost podman[329362]: 2025-10-05 10:05:57.383395554 +0000 UTC m=+0.178684686 container remove 6c1b2513761439ce3414d5824b0d1aa095aaf30ac685793929347c0977f642ea (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e7814bf0-4429-4174-bc57-7951dd1eed4b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:05:57 localhost ovn_controller[157556]: 2025-10-05T10:05:57Z|00128|binding|INFO|Setting lport e477820b-a10a-448c-8977-3bd6b09123f4 ovn-installed in OVS
Oct 5 06:05:57 localhost ovn_controller[157556]: 2025-10-05T10:05:57Z|00129|binding|INFO|Setting lport e477820b-a10a-448c-8977-3bd6b09123f4 up in Southbound
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost kernel: device tap4bc8173f-5c left promiscuous mode
Oct 5 06:05:57 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses
Oct 5 06:05:57 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:05:57 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:05:57 localhost podman[329382]: 2025-10-05 10:05:57.414900288 +0000 UTC m=+0.179005135 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:57.439 271653 INFO neutron.agent.dhcp.agent [None req-2c3103df-45a6-469d-9746-e8979f420ac3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:05:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:57.440 271653 INFO neutron.agent.dhcp.agent [None req-2c3103df-45a6-469d-9746-e8979f420ac3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:05:57 localhost nova_compute[297130]: 2025-10-05 10:05:57.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:57.777 271653 INFO neutron.agent.dhcp.agent [None req-7c6fa450-fc32-4c90-8f31-ad4f0250cc92 - - - - - -] DHCP configuration for ports {'1a1ebe4e-5b2b-4338-a7a8-3454916fa16d'} is completed
Oct 5 06:05:58 localhost systemd[1]: run-netns-qdhcp\x2de7814bf0\x2d4429\x2d4174\x2dbc57\x2d7951dd1eed4b.mount: Deactivated successfully.
Oct 5 06:05:58 localhost nova_compute[297130]: 2025-10-05 10:05:58.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:05:58 localhost podman[329475]: Oct 5 06:05:58 localhost podman[329475]: 2025-10-05 10:05:58.447023131 +0000 UTC m=+0.126984254 container create 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:05:58 localhost systemd[1]: Started libpod-conmon-96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2.scope. Oct 5 06:05:58 localhost podman[329475]: 2025-10-05 10:05:58.403356757 +0000 UTC m=+0.083317850 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:05:58 localhost systemd[1]: Started libcrun container. 
Oct 5 06:05:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ca9207a939874399cbd2f6322206f9447e8e8512071508169e2554569f3a01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:05:58 localhost podman[329475]: 2025-10-05 10:05:58.521301035 +0000 UTC m=+0.201262118 container init 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 5 06:05:58 localhost podman[329475]: 2025-10-05 10:05:58.536266941 +0000 UTC m=+0.216228004 container start 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 5 06:05:58 localhost dnsmasq[329493]: started, version 2.85 cachesize 150
Oct 5 06:05:58 localhost dnsmasq[329493]: DNS service limited to local subnets
Oct 5 06:05:58 localhost dnsmasq[329493]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:05:58 localhost dnsmasq[329493]: warning: no upstream servers configured
Oct 5 06:05:58 localhost dnsmasq-dhcp[329493]: DHCP, static leases only on 10.101.0.0, lease time 1d
Oct 5 06:05:58 localhost dnsmasq[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/addn_hosts - 0 addresses
Oct 5 06:05:58 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/host
Oct 5 06:05:58 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/opts
Oct 5 06:05:58 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:58.726 271653 INFO neutron.agent.dhcp.agent [None req-8f645872-dd32-4e5c-9226-cf830dd32875 - - - - - -] DHCP configuration for ports {'207b5c34-a8f8-49d7-96ad-983277876548'} is completed
Oct 5 06:05:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:05:59 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:59.403 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:58Z, description=, device_id=5b4337ec-c479-4d14-b6e8-51c5ccb8c8de, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c3d49c40-d1c9-48e5-9b05-404a3707876e, ip_allocation=immediate, mac_address=fa:16:3e:3d:22:c3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:51Z, description=, dns_domain=, id=df7a2d03-b191-45b1-a183-2f789bc978a5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1788462047, port_security_enabled=True, project_id=e38d16b31a8e4ad18dabb5df8c62f1c6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38058, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1509, status=ACTIVE, subnets=['4568ffbd-8d88-4338-a876-8d448739b273'], tags=[], tenant_id=e38d16b31a8e4ad18dabb5df8c62f1c6, updated_at=2025-10-05T10:05:52Z, vlan_transparent=None, network_id=df7a2d03-b191-45b1-a183-2f789bc978a5, port_security_enabled=False, project_id=e38d16b31a8e4ad18dabb5df8c62f1c6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1574, status=DOWN, tags=[], tenant_id=e38d16b31a8e4ad18dabb5df8c62f1c6, updated_at=2025-10-05T10:05:58Z on network df7a2d03-b191-45b1-a183-2f789bc978a5
Oct 5 06:05:59 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:05:59.527 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:59Z, description=, device_id=3c24e571-b9a8-4337-8807-1f883fe89068, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=66f38bf4-176e-4447-b683-4998c6ade1a9, ip_allocation=immediate, mac_address=fa:16:3e:43:1a:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:53Z, description=, dns_domain=, id=0ad0d8a8-9181-4949-b23f-21403c4b400d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1789629030, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58136, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1532, status=ACTIVE, subnets=['f179b26d-ec73-4780-a7e8-7019e0424022'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:05:55Z, vlan_transparent=None, network_id=0ad0d8a8-9181-4949-b23f-21403c4b400d, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1576, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:05:59Z on network 0ad0d8a8-9181-4949-b23f-21403c4b400d
Oct 5 06:05:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:05:59 localhost dnsmasq[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/addn_hosts - 1 addresses
Oct 5 06:05:59 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/host
Oct 5 06:05:59 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/opts
Oct 5 06:05:59 localhost podman[329512]: 2025-10-05 10:05:59.661886539 +0000 UTC m=+0.088901701 container kill 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 5 06:05:59 localhost systemd[1]: tmp-crun.xDRlCL.mount: Deactivated successfully.
Oct 5 06:05:59 localhost nova_compute[297130]: 2025-10-05 10:05:59.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:05:59 localhost dnsmasq[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/addn_hosts - 1 addresses
Oct 5 06:05:59 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/host
Oct 5 06:05:59 localhost podman[329545]: 2025-10-05 10:05:59.821702482 +0000 UTC m=+0.064749896 container kill 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:05:59 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/opts
Oct 5 06:06:00 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:00.032 271653 INFO neutron.agent.dhcp.agent [None req-401e9231-4dcf-43dd-890c-e560057ba14a - - - - - -] DHCP configuration for ports {'c3d49c40-d1c9-48e5-9b05-404a3707876e'} is completed
Oct 5 06:06:00 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:00.199 271653 INFO neutron.agent.dhcp.agent [None req-0a9a2de4-956c-4ba5-b5db-9de78630cc56 - - - - - -] DHCP configuration for ports {'66f38bf4-176e-4447-b683-4998c6ade1a9'} is completed
Oct 5 06:06:00 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:00.964 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:59Z, description=, device_id=3c24e571-b9a8-4337-8807-1f883fe89068, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=66f38bf4-176e-4447-b683-4998c6ade1a9, ip_allocation=immediate, mac_address=fa:16:3e:43:1a:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:53Z, description=, dns_domain=, id=0ad0d8a8-9181-4949-b23f-21403c4b400d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1789629030, port_security_enabled=True, project_id=27e03170fdbf44268868a90d25e4e944, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58136, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1532, status=ACTIVE, subnets=['f179b26d-ec73-4780-a7e8-7019e0424022'], tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:05:55Z, vlan_transparent=None, network_id=0ad0d8a8-9181-4949-b23f-21403c4b400d, port_security_enabled=False, project_id=27e03170fdbf44268868a90d25e4e944, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1576, status=DOWN, tags=[], tenant_id=27e03170fdbf44268868a90d25e4e944, updated_at=2025-10-05T10:05:59Z on network 0ad0d8a8-9181-4949-b23f-21403c4b400d
Oct 5 06:06:01 localhost dnsmasq[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/addn_hosts - 1 addresses
Oct 5 06:06:01 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/host
Oct 5 06:06:01 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/opts
Oct 5 06:06:01 localhost podman[329588]: 2025-10-05 10:06:01.191921092 +0000 UTC m=+0.064088038 container kill 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:06:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:01.469 271653 INFO neutron.agent.dhcp.agent [None req-884f1fdd-685f-4ab5-833f-411ebfe5263f - - - - - -] DHCP configuration for ports {'66f38bf4-176e-4447-b683-4998c6ade1a9'} is completed
Oct 5 06:06:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:01.548 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:05:58Z, description=, device_id=5b4337ec-c479-4d14-b6e8-51c5ccb8c8de, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c3d49c40-d1c9-48e5-9b05-404a3707876e, ip_allocation=immediate, mac_address=fa:16:3e:3d:22:c3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:51Z, description=, dns_domain=, id=df7a2d03-b191-45b1-a183-2f789bc978a5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1788462047, port_security_enabled=True, project_id=e38d16b31a8e4ad18dabb5df8c62f1c6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38058, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1509, status=ACTIVE, subnets=['4568ffbd-8d88-4338-a876-8d448739b273'], tags=[], tenant_id=e38d16b31a8e4ad18dabb5df8c62f1c6, updated_at=2025-10-05T10:05:52Z, vlan_transparent=None, network_id=df7a2d03-b191-45b1-a183-2f789bc978a5, port_security_enabled=False, project_id=e38d16b31a8e4ad18dabb5df8c62f1c6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1574, status=DOWN, tags=[], tenant_id=e38d16b31a8e4ad18dabb5df8c62f1c6, updated_at=2025-10-05T10:05:58Z on network df7a2d03-b191-45b1-a183-2f789bc978a5
Oct 5 06:06:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Oct 5 06:06:01 localhost podman[329637]: 2025-10-05 10:06:01.895006175 +0000 UTC m=+0.060310336 container kill 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:06:01 localhost dnsmasq[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/addn_hosts - 1 addresses
Oct 5 06:06:01 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/host
Oct 5 06:06:01 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/opts
Oct 5 06:06:01 localhost dnsmasq[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/addn_hosts - 0 addresses
Oct 5 06:06:01 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/host
Oct 5 06:06:01 localhost dnsmasq-dhcp[329493]: read /var/lib/neutron/dhcp/0ad0d8a8-9181-4949-b23f-21403c4b400d/opts
Oct 5 06:06:01 localhost podman[329649]: 2025-10-05 10:06:01.935948895 +0000 UTC m=+0.068719505 container kill 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:06:02 localhost ovn_controller[157556]: 2025-10-05T10:06:02Z|00130|binding|INFO|Releasing lport e477820b-a10a-448c-8977-3bd6b09123f4 from this chassis (sb_readonly=0)
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost ovn_controller[157556]: 2025-10-05T10:06:02Z|00131|binding|INFO|Setting lport e477820b-a10a-448c-8977-3bd6b09123f4 down in Southbound
Oct 5 06:06:02 localhost kernel: device tape477820b-a1 left promiscuous mode
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.190 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-0ad0d8a8-9181-4949-b23f-21403c4b400d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ad0d8a8-9181-4949-b23f-21403c4b400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27e03170fdbf44268868a90d25e4e944', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ca8a457-d697-45dd-876e-0f87cd06a20e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e477820b-a10a-448c-8977-3bd6b09123f4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.192 163201 INFO neutron.agent.ovn.metadata.agent [-] Port e477820b-a10a-448c-8977-3bd6b09123f4 in datapath 0ad0d8a8-9181-4949-b23f-21403c4b400d unbound from our chassis
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.197 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ad0d8a8-9181-4949-b23f-21403c4b400d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.198 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[aa28ce30-a44c-4864-ac1c-31178cd10547]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:06:02 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:02.242 271653 INFO neutron.agent.dhcp.agent [None req-f706e8e4-54c3-42fa-a870-3137f1295642 - - - - - -] DHCP configuration for ports {'c3d49c40-d1c9-48e5-9b05-404a3707876e'} is completed
Oct 5 06:06:02 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses
Oct 5 06:06:02 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:06:02 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:06:02 localhost podman[329695]: 2025-10-05 10:06:02.251983854 +0000 UTC m=+0.062553918 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 5 06:06:02 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:02.530 271653 INFO neutron.agent.linux.ip_lib [None req-8db297dc-333f-416b-8523-651cece62437 - - - - - -] Device tap10ffbf77-73 cannot be used as it has no MAC address
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost kernel: device tap10ffbf77-73 entered promiscuous mode
Oct 5 06:06:02 localhost ovn_controller[157556]: 2025-10-05T10:06:02Z|00132|binding|INFO|Claiming lport 10ffbf77-7303-43bf-9be3-6a951b1104ec for this chassis.
Oct 5 06:06:02 localhost ovn_controller[157556]: 2025-10-05T10:06:02Z|00133|binding|INFO|10ffbf77-7303-43bf-9be3-6a951b1104ec: Claiming unknown
Oct 5 06:06:02 localhost NetworkManager[5970]: [1759658762.5616] manager: (tap10ffbf77-73): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost systemd-udevd[329727]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.571 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-e21c9222-9038-4bdc-a0c7-e542acc580eb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21c9222-9038-4bdc-a0c7-e542acc580eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72955e96-219e-46d2-af05-c9a4310eeeb6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=10ffbf77-7303-43bf-9be3-6a951b1104ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.573 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 10ffbf77-7303-43bf-9be3-6a951b1104ec in datapath e21c9222-9038-4bdc-a0c7-e542acc580eb bound to our chassis
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.575 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e21c9222-9038-4bdc-a0c7-e542acc580eb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 5 06:06:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:02.576 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4ddbe4-aac6-4bc3-b65d-1fca358ea607]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost ovn_controller[157556]: 2025-10-05T10:06:02Z|00134|binding|INFO|Setting lport 10ffbf77-7303-43bf-9be3-6a951b1104ec ovn-installed in OVS
Oct 5 06:06:02 localhost ovn_controller[157556]: 2025-10-05T10:06:02Z|00135|binding|INFO|Setting lport 10ffbf77-7303-43bf-9be3-6a951b1104ec up in Southbound
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost journal[237639]: ethtool ioctl error on tap10ffbf77-73: No such device
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:02 localhost nova_compute[297130]: 2025-10-05 10:06:02.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:03 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:03.145 2 INFO neutron.agent.securitygroups_rpc [None req-0a4a9087-fb9d-46a4-b94c-813813807afd fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']
Oct 5 06:06:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:03.218 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:02Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3b79a8db-6298-4f25-ae38-f18f50bbcede, ip_allocation=immediate, mac_address=fa:16:3e:dd:ac:22, name=tempest-FloatingIPTestJSON-321814878, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:05:51Z, description=, dns_domain=, id=df7a2d03-b191-45b1-a183-2f789bc978a5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1788462047, port_security_enabled=True, project_id=e38d16b31a8e4ad18dabb5df8c62f1c6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38058, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1509, status=ACTIVE, subnets=['4568ffbd-8d88-4338-a876-8d448739b273'], tags=[], tenant_id=e38d16b31a8e4ad18dabb5df8c62f1c6, updated_at=2025-10-05T10:05:52Z, vlan_transparent=None, network_id=df7a2d03-b191-45b1-a183-2f789bc978a5, port_security_enabled=True, project_id=e38d16b31a8e4ad18dabb5df8c62f1c6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['2859cae9-8599-46b3-8005-27308b18fd8f'], standard_attr_id=1596, status=DOWN, tags=[], tenant_id=e38d16b31a8e4ad18dabb5df8c62f1c6, updated_at=2025-10-05T10:06:02Z on network df7a2d03-b191-45b1-a183-2f789bc978a5
Oct 5 06:06:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 06:06:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 06:06:03 localhost nova_compute[297130]: 2025-10-05 10:06:03.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:06:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:03.425 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:03Z, description=, device_id=c457f009-b977-4bc0-83e9-233e261f0f77, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2d5592bf-0a70-4910-8fd4-e302b57027da, ip_allocation=immediate, mac_address=fa:16:3e:b2:66:8f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1597, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:06:03Z on network cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:06:03 localhost dnsmasq[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/addn_hosts - 2 addresses
Oct 5 06:06:03 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/host
Oct 5 06:06:03 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/opts
Oct 5 06:06:03 localhost podman[329839]: 2025-10-05 10:06:03.529384908 +0000 UTC m=+0.084524513 container kill 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 5 06:06:03 localhost podman[329805]: 2025-10-05 10:06:03.484315206 +0000 UTC m=+0.126295506 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 5 06:06:03 localhost podman[329824]:
Oct 5 06:06:03 localhost podman[329824]: 2025-10-05 10:06:03.552447463 +0000 UTC m=+0.157866942 container create 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:06:03 localhost podman[329805]: 2025-10-05 10:06:03.563847272 +0000 UTC m=+0.205827502 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 5 06:06:03 localhost podman[329806]: 2025-10-05 10:06:03.512896021 +0000 UTC m=+0.156995948 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged':
True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:06:03 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:06:03 localhost systemd[1]: Started libpod-conmon-120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89.scope. Oct 5 06:06:03 localhost podman[329824]: 2025-10-05 10:06:03.503595438 +0000 UTC m=+0.109014977 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:03 localhost systemd[1]: Started libcrun container. 
Oct 5 06:06:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4366ff6e2f06846e18dfbd163e4cd7e21d737d9b58c729726756002a963db12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:03 localhost podman[329824]: 2025-10-05 10:06:03.621037152 +0000 UTC m=+0.226456631 container init 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:06:03 localhost podman[329824]: 2025-10-05 10:06:03.630168871 +0000 UTC m=+0.235588350 container start 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:06:03 localhost dnsmasq[329902]: started, version 2.85 cachesize 150 Oct 5 06:06:03 localhost dnsmasq[329902]: DNS service limited to local subnets Oct 5 06:06:03 localhost dnsmasq[329902]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:03 localhost dnsmasq[329902]: warning: no upstream servers configured Oct 
5 06:06:03 localhost dnsmasq-dhcp[329902]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:03 localhost podman[329806]: 2025-10-05 10:06:03.642948427 +0000 UTC m=+0.287048364 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:06:03 localhost dnsmasq[329902]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/addn_hosts - 0 addresses Oct 5 06:06:03 localhost dnsmasq-dhcp[329902]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/host Oct 5 06:06:03 localhost dnsmasq-dhcp[329902]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/opts Oct 5 06:06:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:06:03 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:06:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e125 e125: 6 total, 6 up, 6 in Oct 5 06:06:03 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:06:03 localhost podman[329899]: 2025-10-05 10:06:03.70762347 +0000 UTC m=+0.073723370 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:03 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:03 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:03.858 271653 INFO neutron.agent.dhcp.agent [None req-1f1f2e5b-ab4d-460f-8b02-69d017774b89 - - - - - -] DHCP configuration for ports {'125f5b05-a2cf-47d5-81cd-fcd51b005ff0', '3b79a8db-6298-4f25-ae38-f18f50bbcede'} is completed#033[00m Oct 5 06:06:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:03 localhost dnsmasq[329902]: exiting on receipt of SIGTERM Oct 5 06:06:03 localhost podman[329944]: 2025-10-05 10:06:03.997105289 +0000 UTC m=+0.067543743 container kill 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:06:03 localhost systemd[1]: libpod-120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89.scope: Deactivated successfully. Oct 5 06:06:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:04.005 271653 INFO neutron.agent.dhcp.agent [None req-b38c4de8-9dc8-4f12-85a8-13464c359330 - - - - - -] DHCP configuration for ports {'2d5592bf-0a70-4910-8fd4-e302b57027da'} is completed#033[00m Oct 5 06:06:04 localhost podman[329957]: 2025-10-05 10:06:04.074301812 +0000 UTC m=+0.059656758 container died 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:04 localhost podman[329957]: 2025-10-05 10:06:04.106256559 +0000 UTC m=+0.091611475 container cleanup 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:06:04 localhost systemd[1]: libpod-conmon-120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89.scope: Deactivated successfully. Oct 5 06:06:04 localhost podman[329958]: 2025-10-05 10:06:04.151029093 +0000 UTC m=+0.129621677 container remove 120283f8e51585e3b390dde3f763a55c771e98008de011e83e0f98949728ff89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 06:06:04 localhost systemd[1]: tmp-crun.s4lhD0.mount: Deactivated successfully. Oct 5 06:06:04 localhost nova_compute[297130]: 2025-10-05 10:06:04.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:04 localhost dnsmasq[329493]: exiting on receipt of SIGTERM Oct 5 06:06:04 localhost systemd[1]: libpod-96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2.scope: Deactivated successfully. 
Oct 5 06:06:04 localhost podman[330001]: 2025-10-05 10:06:04.826030843 +0000 UTC m=+0.059159365 container kill 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:06:04 localhost podman[330022]: 2025-10-05 10:06:04.899858635 +0000 UTC m=+0.050927812 container died 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:06:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:06:04 localhost podman[330022]: 2025-10-05 10:06:04.944478715 +0000 UTC m=+0.095547862 container remove 96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ad0d8a8-9181-4949-b23f-21403c4b400d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:06:04 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:04.986 2 INFO neutron.agent.securitygroups_rpc [None req-6ec6a26e-e61d-4275-bc99-fc7b5b675d25 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:06:04 localhost systemd[1]: libpod-conmon-96b3156a08484b23a6b8c94a172a732cb8eaf08e2cec20475a26e7bdb98075b2.scope: Deactivated successfully. Oct 5 06:06:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:05.208 271653 INFO neutron.agent.dhcp.agent [None req-3215a051-e12e-40e4-89c6-6c0951d75489 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:06:05 localhost podman[330065]: 2025-10-05 10:06:05.417830459 +0000 UTC m=+0.083803624 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:06:05 localhost podman[330065]: 2025-10-05 10:06:05.431564851 +0000 UTC m=+0.097537966 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, config_id=edpm) Oct 5 06:06:05 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:06:05 localhost systemd[1]: var-lib-containers-storage-overlay-f0ca9207a939874399cbd2f6322206f9447e8e8512071508169e2554569f3a01-merged.mount: Deactivated successfully. Oct 5 06:06:05 localhost systemd[1]: run-netns-qdhcp\x2d0ad0d8a8\x2d9181\x2d4949\x2db23f\x2d21403c4b400d.mount: Deactivated successfully. 
Oct 5 06:06:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail Oct 5 06:06:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e126 e126: 6 total, 6 up, 6 in Oct 5 06:06:05 localhost podman[330109]: Oct 5 06:06:05 localhost podman[330109]: 2025-10-05 10:06:05.746857899 +0000 UTC m=+0.099648392 container create 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 5 06:06:05 localhost systemd[1]: Started libpod-conmon-74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e.scope. Oct 5 06:06:05 localhost podman[330109]: 2025-10-05 10:06:05.699380412 +0000 UTC m=+0.052170945 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:05 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:05.811 2 INFO neutron.agent.securitygroups_rpc [None req-6dc94aa3-8b4b-4c55-895e-1ee1eef07ff5 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:05 localhost systemd[1]: Started libcrun container. 
Oct 5 06:06:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1592c053d9c3adacc1a5090bb572ad249d1060f2aabb6a6ca5c22d50bc0290/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:05 localhost podman[330109]: 2025-10-05 10:06:05.843565162 +0000 UTC m=+0.196355635 container init 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:06:05 localhost podman[330109]: 2025-10-05 10:06:05.853186162 +0000 UTC m=+0.205976635 container start 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:05.861 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:05 localhost dnsmasq[330127]: started, version 2.85 cachesize 150 Oct 5 06:06:05 localhost dnsmasq[330127]: DNS service limited to local subnets Oct 5 06:06:05 localhost dnsmasq[330127]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:05 localhost dnsmasq[330127]: warning: no upstream servers configured Oct 5 06:06:05 localhost dnsmasq-dhcp[330127]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:06:05 localhost dnsmasq-dhcp[330127]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:05 localhost dnsmasq[330127]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/addn_hosts - 0 addresses Oct 5 06:06:05 localhost dnsmasq-dhcp[330127]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/host Oct 5 06:06:05 localhost dnsmasq-dhcp[330127]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/opts Oct 5 06:06:06 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:06.140 271653 INFO neutron.agent.dhcp.agent [None req-bf56779d-b3d5-4233-8bf2-b68df8214b16 - - - - - -] DHCP configuration for ports {'125f5b05-a2cf-47d5-81cd-fcd51b005ff0', '10ffbf77-7303-43bf-9be3-6a951b1104ec'} is completed#033[00m Oct 5 06:06:06 localhost dnsmasq[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/addn_hosts - 1 addresses Oct 5 06:06:06 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/host Oct 5 06:06:06 localhost podman[330145]: 2025-10-05 10:06:06.254974796 +0000 UTC m=+0.061555590 container kill 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:06 localhost 
dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/opts Oct 5 06:06:06 localhost nova_compute[297130]: 2025-10-05 10:06:06.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:06 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:06:06 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:06 localhost podman[330180]: 2025-10-05 10:06:06.463033727 +0000 UTC m=+0.073584376 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 06:06:06 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e127 e127: 6 total, 6 up, 6 in Oct 5 06:06:07 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:07.134 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:06Z, description=, device_id=ea92dfcf-de9d-46ba-8623-1ac8eb6dc8ec, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d10f53e4-0eb4-4457-b535-0f5c0949974e, 
ip_allocation=immediate, mac_address=fa:16:3e:22:88:20, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1614, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:06:06Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:06:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:07.191 2 INFO neutron.agent.securitygroups_rpc [None req-40270d90-3d16-4270-bc86-50cb382d9d08 dd7c8ef99d0f41198e47651e3f745b5f b19cb2ed6df34a0dad27155d804f6680 - - default default] Security group member updated ['587ef845-3f12-4f64-8d07-19635386ce1f']#033[00m Oct 5 06:06:07 localhost podman[330221]: 2025-10-05 10:06:07.363179782 +0000 UTC m=+0.060555312 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:06:07 localhost systemd[1]: tmp-crun.i2t5pN.mount: Deactivated successfully. Oct 5 06:06:07 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:06:07 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:07 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 8.7 KiB/s wr, 132 op/s Oct 5 06:06:07 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:07.666 271653 INFO neutron.agent.dhcp.agent [None req-efa2707c-6836-4872-a558-77b91d87d9fc - - - - - -] DHCP configuration for ports {'d10f53e4-0eb4-4457-b535-0f5c0949974e'} is completed#033[00m Oct 5 06:06:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e128 e128: 6 total, 6 up, 6 in Oct 5 06:06:07 localhost dnsmasq[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/addn_hosts - 0 addresses Oct 5 06:06:07 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/host Oct 5 06:06:07 localhost systemd[1]: tmp-crun.xOyc5e.mount: Deactivated successfully. 
Oct 5 06:06:07 localhost dnsmasq-dhcp[329204]: read /var/lib/neutron/dhcp/df7a2d03-b191-45b1-a183-2f789bc978a5/opts Oct 5 06:06:07 localhost podman[330257]: 2025-10-05 10:06:07.896614616 +0000 UTC m=+0.073136465 container kill 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:08 localhost ovn_controller[157556]: 2025-10-05T10:06:08Z|00136|binding|INFO|Releasing lport 12d465d7-0976-42ac-8ca7-99de5248b25c from this chassis (sb_readonly=0) Oct 5 06:06:08 localhost nova_compute[297130]: 2025-10-05 10:06:08.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:08 localhost kernel: device tap12d465d7-09 left promiscuous mode Oct 5 06:06:08 localhost ovn_controller[157556]: 2025-10-05T10:06:08Z|00137|binding|INFO|Setting lport 12d465d7-0976-42ac-8ca7-99de5248b25c down in Southbound Oct 5 06:06:08 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:08.128 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 
'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-df7a2d03-b191-45b1-a183-2f789bc978a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df7a2d03-b191-45b1-a183-2f789bc978a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e38d16b31a8e4ad18dabb5df8c62f1c6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06cb7391-dec3-4cca-ab02-ac3643e95def, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=12d465d7-0976-42ac-8ca7-99de5248b25c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:08 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:08.130 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 12d465d7-0976-42ac-8ca7-99de5248b25c in datapath df7a2d03-b191-45b1-a183-2f789bc978a5 unbound from our chassis#033[00m Oct 5 06:06:08 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:08.134 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df7a2d03-b191-45b1-a183-2f789bc978a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:06:08 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:08.134 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[d59c1748-f165-448a-9ed9-3c2476fd253c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:08 localhost nova_compute[297130]: 2025-10-05 10:06:08.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:08 
localhost nova_compute[297130]: 2025-10-05 10:06:08.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 8.7 KiB/s wr, 132 op/s Oct 5 06:06:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e129 e129: 6 total, 6 up, 6 in Oct 5 06:06:09 localhost nova_compute[297130]: 2025-10-05 10:06:09.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost podman[330297]: 2025-10-05 10:06:09.832563575 +0000 UTC m=+0.060429060 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:06:09 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:06:09 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:09 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:09 localhost nova_compute[297130]: 2025-10-05 10:06:09.871 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:09.872 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:09 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:09.873 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:06:10 localhost systemd[1]: tmp-crun.eaHEN3.mount: Deactivated successfully. 
Oct 5 06:06:10 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:06:10 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:10 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:10 localhost podman[330336]: 2025-10-05 10:06:10.298567459 +0000 UTC m=+0.061721124 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:11 localhost podman[330375]: 2025-10-05 10:06:11.020951045 +0000 UTC m=+0.061346274 container kill 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:06:11 localhost dnsmasq[329204]: exiting on receipt of SIGTERM Oct 5 06:06:11 localhost systemd[1]: libpod-77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639.scope: Deactivated successfully. 
Oct 5 06:06:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:06:11 localhost podman[330389]: 2025-10-05 10:06:11.090634084 +0000 UTC m=+0.053897113 container died 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:06:11 localhost podman[330389]: 2025-10-05 10:06:11.178932278 +0000 UTC m=+0.142195267 container cleanup 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:11 localhost systemd[1]: libpod-conmon-77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639.scope: Deactivated successfully. 
Oct 5 06:06:11 localhost podman[330397]: 2025-10-05 10:06:11.144929096 +0000 UTC m=+0.088794059 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:06:11 localhost podman[330397]: 2025-10-05 10:06:11.232182922 +0000 UTC 
m=+0.176047855 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:06:11 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:06:11 localhost podman[330390]: 2025-10-05 10:06:11.253666604 +0000 UTC m=+0.209551532 container remove 77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df7a2d03-b191-45b1-a183-2f789bc978a5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:11.326 271653 INFO neutron.agent.dhcp.agent [None req-3f5608a8-c0fd-468d-8e54-71ff199d9735 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:11.327 271653 INFO neutron.agent.dhcp.agent [None req-3f5608a8-c0fd-468d-8e54-71ff199d9735 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:11.376 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:11Z, description=, device_id=c9e45d45-babe-4c99-b972-9c06d5c1238e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ddcceafb-804f-4c0a-a182-94fc68f5efb0, ip_allocation=immediate, mac_address=fa:16:3e:d4:e4:62, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, 
ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1637, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:06:11Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:06:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:06:11 Oct 5 06:06:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:06:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:06:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'manila_data', 'backups', 'manila_metadata', 'images'] Oct 5 06:06:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:06:11 localhost nova_compute[297130]: 2025-10-05 10:06:11.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:11 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:06:11 localhost podman[330454]: 2025-10-05 10:06:11.61718052 +0000 UTC m=+0.069087284 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:06:11 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:11 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e130 e130: 6 total, 6 up, 6 in Oct 5 06:06:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 11 KiB/s wr, 161 op/s Oct 5 06:06:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:06:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:06:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:06:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:06:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:06:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00430020109226689 of space, bias 1.0, pg target 0.8586068180892891 quantized to 32 (current 32) Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:06:11 localhost 
ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:06:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:06:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:06:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:11.850 271653 INFO neutron.agent.dhcp.agent [None req-7b729dc0-dd41-48a2-ac2a-771ec097f671 - - - - - -] DHCP configuration for ports {'ddcceafb-804f-4c0a-a182-94fc68f5efb0'} is completed#033[00m Oct 5 06:06:11 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:11.875 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:06:12 localhost systemd[1]: var-lib-containers-storage-overlay-ab6b23e3b9f5746ed755ea771bbcc72105048f3c856f9710ae8fd6e8c2b27952-merged.mount: Deactivated successfully. Oct 5 06:06:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77659144a20a3342ccab130f4188286b5459b93702d10818082f1fab44b19639-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:12 localhost systemd[1]: run-netns-qdhcp\x2ddf7a2d03\x2db191\x2d45b1\x2da183\x2d2f789bc978a5.mount: Deactivated successfully. Oct 5 06:06:13 localhost nova_compute[297130]: 2025-10-05 10:06:13.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:13 localhost nova_compute[297130]: 2025-10-05 10:06:13.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 169 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 4.0 MiB/s wr, 77 op/s Oct 5 06:06:13 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:13.842 2 INFO neutron.agent.securitygroups_rpc [None req-a1540dd6-4210-49ff-b017-69a28b18671d fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:14 localhost nova_compute[297130]: 2025-10-05 10:06:14.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - 
-] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:14 localhost nova_compute[297130]: 2025-10-05 10:06:14.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:14.712 2 INFO neutron.agent.securitygroups_rpc [None req-c02d47ac-3ccc-4c3b-9efe-38cf6a9d2160 fdf4ee322daa40efa937f6a9d0372fdb e38d16b31a8e4ad18dabb5df8c62f1c6 - - default default] Security group member updated ['2859cae9-8599-46b3-8005-27308b18fd8f']#033[00m Oct 5 06:06:14 localhost nova_compute[297130]: 2025-10-05 10:06:14.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:14.918 2 INFO neutron.agent.securitygroups_rpc [None req-f53d150f-ff34-4d45-b035-cd4bf1baabd9 a4fad2a194fa4a66911e27f722075fa7 27e03170fdbf44268868a90d25e4e944 - - default default] Security group member updated ['14f00663-08a7-497a-b752-895d5ab0d915']#033[00m Oct 5 06:06:15 localhost nova_compute[297130]: 2025-10-05 10:06:15.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:15 localhost nova_compute[297130]: 2025-10-05 10:06:15.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:06:15 localhost 
nova_compute[297130]: 2025-10-05 10:06:15.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:06:15 localhost nova_compute[297130]: 2025-10-05 10:06:15.288 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:06:15 localhost podman[330493]: 2025-10-05 10:06:15.475229861 +0000 UTC m=+0.061141029 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:15 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:06:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 169 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 3.0 MiB/s wr, 58 op/s Oct 5 06:06:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e131 e131: 6 total, 6 up, 6 in Oct 5 06:06:16 localhost nova_compute[297130]: 2025-10-05 10:06:16.272 2 DEBUG 
oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e132 e132: 6 total, 6 up, 6 in Oct 5 06:06:16 localhost openstack_network_exporter[250246]: ERROR 10:06:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:16 localhost openstack_network_exporter[250246]: ERROR 10:06:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:16 localhost openstack_network_exporter[250246]: ERROR 10:06:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:06:16 localhost openstack_network_exporter[250246]: ERROR 10:06:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:06:16 localhost openstack_network_exporter[250246]: Oct 5 06:06:16 localhost openstack_network_exporter[250246]: ERROR 10:06:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:06:16 localhost openstack_network_exporter[250246]: Oct 5 06:06:16 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:16.979 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:16Z, description=, device_id=4de48a63-c4cc-4491-90f4-ed5239f4dbe2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=818448ae-2323-485d-8d5c-fea9e94c223f, ip_allocation=immediate, mac_address=fa:16:3e:8c:5e:6d, name=, network=admin_state_up=True, availability_zone_hints=[], 
availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1652, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:06:16Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:06:17 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:06:17 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:17 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:17 localhost podman[330532]: 2025-10-05 10:06:17.193050836 +0000 UTC m=+0.061053867 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:17 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:17.445 271653 INFO neutron.agent.dhcp.agent [None req-c4eec390-f49f-42a4-b5b7-a6ec96b25317 - - - - - -] DHCP configuration for ports {'818448ae-2323-485d-8d5c-fea9e94c223f'} is completed#033[00m Oct 5 06:06:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 19 MiB/s wr, 121 op/s Oct 5 06:06:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e133 e133: 6 total, 6 up, 6 in Oct 5 06:06:18 localhost nova_compute[297130]: 2025-10-05 10:06:18.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:06:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e134 e134: 6 total, 6 up, 6 in Oct 5 06:06:18 localhost podman[330554]: 2025-10-05 10:06:18.934805948 +0000 UTC m=+0.088546842 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:06:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:18 localhost podman[330554]: 2025-10-05 10:06:18.975217894 +0000 UTC m=+0.128958778 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:06:18 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:06:18 localhost podman[330553]: 2025-10-05 10:06:18.991478695 +0000 UTC m=+0.149400532 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible) Oct 5 06:06:19 localhost podman[330553]: 2025-10-05 10:06:19.030254996 +0000 UTC m=+0.188176863 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 06:06:19 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.304 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.304 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.305 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.305 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain 
(node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.305 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:06:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 22 MiB/s wr, 67 op/s Oct 5 06:06:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:06:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2005359666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.754 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:19 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:06:19 localhost podman[330633]: 2025-10-05 10:06:19.961221026 +0000 UTC m=+0.065679731 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:06:19 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:19 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.976 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.978 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11580MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.978 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:06:19 localhost nova_compute[297130]: 2025-10-05 10:06:19.979 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:06:20 
localhost nova_compute[297130]: 2025-10-05 10:06:20.263 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.264 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.280 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:06:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:20.404 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:06:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:20.405 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:06:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:20.406 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:06:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:06:20 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/614437216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.696 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.703 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.729 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.758 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] 
Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:06:20 localhost nova_compute[297130]: 2025-10-05 10:06:20.759 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:06:21 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:21.388 2 INFO neutron.agent.securitygroups_rpc [None req-ac8f0eac-2a70-457c-8a61-6ee9d997209b b817219f01e3454e8694e283e92fc44c ea1a94bd61a440f3957671694183ce08 - - default default] Security group member updated ['d6c25099-34f5-417b-b95b-a4264a8e3587']#033[00m Oct 5 06:06:21 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:06:21 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:21 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:21 localhost podman[330692]: 2025-10-05 10:06:21.508379315 +0000 UTC m=+0.061890920 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:06:21 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:21.574 2 
INFO neutron.agent.securitygroups_rpc [None req-4cb6cc17-e2d9-4ef7-b7c8-bf598d55a89a a4fad2a194fa4a66911e27f722075fa7 27e03170fdbf44268868a90d25e4e944 - - default default] Security group member updated ['14f00663-08a7-497a-b752-895d5ab0d915']#033[00m Oct 5 06:06:21 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:21.646 271653 INFO neutron.agent.dhcp.agent [None req-84d21ad5-d067-4300-962a-bac817104413 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:21Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=982010c0-129f-468b-86f5-ccce7cb7cb24, ip_allocation=immediate, mac_address=fa:16:3e:0b:4d:08, name=tempest-RoutersAdminNegativeIpV6Test-1314280216, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=True, project_id=ea1a94bd61a440f3957671694183ce08, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d6c25099-34f5-417b-b95b-a4264a8e3587'], standard_attr_id=1675, status=DOWN, tags=[], 
tenant_id=ea1a94bd61a440f3957671694183ce08, updated_at=2025-10-05T10:06:21Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:06:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 15 MiB/s wr, 46 op/s Oct 5 06:06:21 localhost nova_compute[297130]: 2025-10-05 10:06:21.759 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:21 localhost nova_compute[297130]: 2025-10-05 10:06:21.760 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:06:21 localhost nova_compute[297130]: 2025-10-05 10:06:21.760 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:06:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:06:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:06:21 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:06:21 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:21 localhost podman[330729]: 2025-10-05 10:06:21.888801138 +0000 UTC m=+0.068762284 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:06:21 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:21 localhost systemd[1]: tmp-crun.nH4Sgb.mount: Deactivated successfully. 
Oct 5 06:06:21 localhost podman[330737]: 2025-10-05 10:06:21.935066293 +0000 UTC m=+0.087408070 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001) Oct 5 06:06:22 localhost podman[330738]: 2025-10-05 10:06:22.003379805 +0000 UTC m=+0.150660266 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:06:22 localhost podman[330737]: 2025-10-05 10:06:22.021962539 +0000 UTC m=+0.174304306 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:06:22 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:06:22 localhost podman[330738]: 2025-10-05 10:06:22.109360549 +0000 UTC m=+0.256640970 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:22 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:22.110 271653 INFO neutron.agent.dhcp.agent [None req-f0f6ef13-7e35-4002-8bd7-b0eb6962bc67 - - - - - -] DHCP configuration for ports {'982010c0-129f-468b-86f5-ccce7cb7cb24'} is completed#033[00m Oct 5 06:06:22 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:06:23 localhost nova_compute[297130]: 2025-10-05 10:06:23.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 13 MiB/s wr, 99 op/s Oct 5 06:06:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:24.144 2 INFO neutron.agent.securitygroups_rpc [None req-42adb9bd-1c44-4c47-bb81-feb4a013376d b817219f01e3454e8694e283e92fc44c ea1a94bd61a440f3957671694183ce08 - - default default] Security group member updated ['d6c25099-34f5-417b-b95b-a4264a8e3587']#033[00m Oct 5 06:06:24 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:06:24 localhost podman[330812]: 2025-10-05 10:06:24.379109817 +0000 UTC m=+0.071371906 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:24 localhost systemd[1]: tmp-crun.Lv2elq.mount: Deactivated successfully. 
Oct 5 06:06:24 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:24 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e135 e135: 6 total, 6 up, 6 in Oct 5 06:06:24 localhost nova_compute[297130]: 2025-10-05 10:06:24.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 772 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 54 op/s Oct 5 06:06:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e136 e136: 6 total, 6 up, 6 in Oct 5 06:06:26 localhost podman[248157]: time="2025-10-05T10:06:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:06:26 localhost podman[248157]: @ - - [05/Oct/2025:10:06:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148237 "" "Go-http-client/1.1" Oct 5 06:06:26 localhost podman[248157]: @ - - [05/Oct/2025:10:06:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19791 "" "Go-http-client/1.1" Oct 5 06:06:26 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:26.431 2 INFO neutron.agent.securitygroups_rpc [None req-5318ebe2-2095-4bac-986a-b46f6b98b717 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e137 e137: 6 total, 6 up, 6 in Oct 5 06:06:27 localhost dnsmasq[330127]: exiting on receipt of SIGTERM Oct 5 06:06:27 localhost systemd[1]: tmp-crun.ZWxTap.mount: Deactivated 
successfully. Oct 5 06:06:27 localhost podman[330893]: 2025-10-05 10:06:27.143007713 +0000 UTC m=+0.065818977 container kill 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:06:27 localhost systemd[1]: libpod-74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e.scope: Deactivated successfully. Oct 5 06:06:27 localhost podman[330915]: 2025-10-05 10:06:27.20527042 +0000 UTC m=+0.050957502 container died 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:06:27 localhost podman[330915]: 2025-10-05 10:06:27.280055718 +0000 UTC m=+0.125742790 container cleanup 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:06:27 localhost systemd[1]: libpod-conmon-74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e.scope: Deactivated successfully. Oct 5 06:06:27 localhost podman[330922]: 2025-10-05 10:06:27.302198859 +0000 UTC m=+0.133544462 container remove 74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:06:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:06:27 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:06:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:06:27 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:06:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:06:27 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 9a4c05fe-0e99-47b5-8704-825fa8598ee6 (Updating 
node-proxy deployment (+3 -> 3)) Oct 5 06:06:27 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 9a4c05fe-0e99-47b5-8704-825fa8598ee6 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:06:27 localhost ceph-mgr[301363]: [progress INFO root] Completed event 9a4c05fe-0e99-47b5-8704-825fa8598ee6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:06:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:06:27 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:06:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 4.7 KiB/s wr, 78 op/s Oct 5 06:06:27 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:06:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:06:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e138 e138: 6 total, 6 up, 6 in Oct 5 06:06:28 localhost podman[331025]: Oct 5 06:06:28 localhost podman[331025]: 2025-10-05 10:06:28.111918202 +0000 UTC m=+0.087523973 container create 6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:06:28 localhost systemd[1]: var-lib-containers-storage-overlay-cf1592c053d9c3adacc1a5090bb572ad249d1060f2aabb6a6ca5c22d50bc0290-merged.mount: Deactivated successfully. Oct 5 06:06:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74ddd21c34de03a60a2cf024b72429885cd11a0051f264dc89d99eec53bd0f3e-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:28 localhost systemd[1]: Started libpod-conmon-6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3.scope. Oct 5 06:06:28 localhost podman[331025]: 2025-10-05 10:06:28.06940219 +0000 UTC m=+0.045007981 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:28 localhost systemd[1]: Started libcrun container. Oct 5 06:06:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2914cc9629db366b1b69396451654c83897b83200f28f4bee39934b95829f6b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:28 localhost podman[331025]: 2025-10-05 10:06:28.188591151 +0000 UTC m=+0.164196912 container init 6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:28 localhost podman[331025]: 2025-10-05 10:06:28.199507257 +0000 UTC m=+0.175113018 container start 6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:06:28 localhost dnsmasq[331043]: started, version 2.85 cachesize 150 Oct 5 06:06:28 localhost dnsmasq[331043]: DNS service limited to local subnets Oct 5 06:06:28 localhost dnsmasq[331043]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:28 localhost dnsmasq[331043]: warning: no upstream servers configured Oct 5 06:06:28 localhost dnsmasq-dhcp[331043]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:28 localhost dnsmasq[331043]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/addn_hosts - 0 addresses Oct 5 06:06:28 localhost dnsmasq-dhcp[331043]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/host Oct 5 06:06:28 localhost dnsmasq-dhcp[331043]: read /var/lib/neutron/dhcp/e21c9222-9038-4bdc-a0c7-e542acc580eb/opts Oct 5 06:06:28 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:28.431 271653 INFO neutron.agent.dhcp.agent [None req-de555bf2-9f55-4922-b256-7a5b216c7f1e - - - - - -] DHCP configuration for ports {'125f5b05-a2cf-47d5-81cd-fcd51b005ff0', '10ffbf77-7303-43bf-9be3-6a951b1104ec'} is completed#033[00m Oct 5 06:06:28 localhost nova_compute[297130]: 2025-10-05 10:06:28.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:28 localhost dnsmasq[331043]: exiting on receipt of SIGTERM Oct 5 06:06:28 localhost podman[331061]: 2025-10-05 10:06:28.570974609 +0000 UTC m=+0.110108247 container kill 
6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:06:28 localhost systemd[1]: libpod-6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3.scope: Deactivated successfully. Oct 5 06:06:28 localhost podman[331075]: 2025-10-05 10:06:28.636928606 +0000 UTC m=+0.055600458 container died 6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:28 localhost podman[331075]: 2025-10-05 10:06:28.668358618 +0000 UTC m=+0.087030440 container cleanup 6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS) Oct 5 06:06:28 localhost systemd[1]: libpod-conmon-6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3.scope: Deactivated successfully. Oct 5 06:06:28 localhost podman[331077]: 2025-10-05 10:06:28.718484958 +0000 UTC m=+0.126366457 container remove 6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e21c9222-9038-4bdc-a0c7-e542acc580eb, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:06:28 localhost nova_compute[297130]: 2025-10-05 10:06:28.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:28 localhost kernel: device tap10ffbf77-73 left promiscuous mode Oct 5 06:06:28 localhost ovn_controller[157556]: 2025-10-05T10:06:28Z|00138|binding|INFO|Releasing lport 10ffbf77-7303-43bf-9be3-6a951b1104ec from this chassis (sb_readonly=0) Oct 5 06:06:28 localhost ovn_controller[157556]: 2025-10-05T10:06:28Z|00139|binding|INFO|Setting lport 10ffbf77-7303-43bf-9be3-6a951b1104ec down in Southbound Oct 5 06:06:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:28.742 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'10.100.0.2/28 2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-e21c9222-9038-4bdc-a0c7-e542acc580eb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21c9222-9038-4bdc-a0c7-e542acc580eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f233ce96b74d72b19666e7a11a530a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72955e96-219e-46d2-af05-c9a4310eeeb6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=10ffbf77-7303-43bf-9be3-6a951b1104ec) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:28.744 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 10ffbf77-7303-43bf-9be3-6a951b1104ec in datapath e21c9222-9038-4bdc-a0c7-e542acc580eb unbound from our chassis#033[00m Oct 5 06:06:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:28.746 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21c9222-9038-4bdc-a0c7-e542acc580eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:06:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:28.747 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb979cd-8b71-4402-aed3-d7d1488344cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:28 localhost nova_compute[297130]: 2025-10-05 10:06:28.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:28 localhost nova_compute[297130]: 2025-10-05 10:06:28.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:29 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:29.131 271653 INFO neutron.agent.dhcp.agent [None req-31bbb60e-b33c-450a-b112-072dad9a918a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:29 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:29.131 271653 INFO neutron.agent.dhcp.agent [None req-31bbb60e-b33c-450a-b112-072dad9a918a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:29 localhost systemd[1]: var-lib-containers-storage-overlay-2914cc9629db366b1b69396451654c83897b83200f28f4bee39934b95829f6b4-merged.mount: Deactivated successfully. Oct 5 06:06:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c3a6a7b3616350b3f6b78f5e730b66bcc8a481a845a6844629d0093fb197be3-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:29 localhost systemd[1]: run-netns-qdhcp\x2de21c9222\x2d9038\x2d4bdc\x2da0c7\x2de542acc580eb.mount: Deactivated successfully. 
Oct 5 06:06:29 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:29.519 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 5.5 KiB/s rd, 1.2 KiB/s wr, 9 op/s Oct 5 06:06:29 localhost nova_compute[297130]: 2025-10-05 10:06:29.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:29 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:29.871 2 INFO neutron.agent.securitygroups_rpc [None req-e54b65f2-7173-4862-a9ae-d94123481585 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:29 localhost nova_compute[297130]: 2025-10-05 10:06:29.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:30 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:30.090 2 INFO neutron.agent.securitygroups_rpc [None req-e54b65f2-7173-4862-a9ae-d94123481585 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:30 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:30.724 2 INFO neutron.agent.securitygroups_rpc [None req-64668cd4-df0c-434e-b55d-562cf94e8983 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e139 e139: 6 total, 6 up, 6 in Oct 5 06:06:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 
active+clean; 145 MiB data, 756 MiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 1.0 KiB/s wr, 7 op/s Oct 5 06:06:31 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:06:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:06:32 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:32.352 2 INFO neutron.agent.securitygroups_rpc [None req-5d192d4f-b77f-4667-88b6-26c01da73471 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:32 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:32.514 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:32 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:06:33 localhost nova_compute[297130]: 2025-10-05 10:06:33.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 2.7 KiB/s wr, 66 op/s Oct 5 06:06:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e140 e140: 6 total, 6 up, 6 in Oct 5 06:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:06:33 localhost podman[331105]: 2025-10-05 10:06:33.926432989 +0000 UTC m=+0.088864530 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:06:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:33 localhost podman[331105]: 2025-10-05 10:06:33.961922241 +0000 UTC m=+0.124353772 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 
'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:06:33 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:06:33 localhost podman[331104]: 2025-10-05 10:06:33.988458721 +0000 UTC m=+0.153265106 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:34 localhost podman[331104]: 2025-10-05 10:06:34.002226054 +0000 UTC m=+0.167032499 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:06:34 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:06:34 localhost nova_compute[297130]: 2025-10-05 10:06:34.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 1.7 KiB/s wr, 53 op/s Oct 5 06:06:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:06:35 localhost podman[331145]: 2025-10-05 10:06:35.903195424 +0000 UTC m=+0.074255285 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible) Oct 5 06:06:35 localhost podman[331145]: 2025-10-05 10:06:35.94621336 +0000 UTC m=+0.117273161 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 06:06:35 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:06:37 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:37.178 2 INFO neutron.agent.securitygroups_rpc [None req-caada808-5cf9-4ee6-be88-f30f7fecba19 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:37.643 271653 INFO neutron.agent.linux.ip_lib [None req-c0ba642c-3d71-4811-8051-4f1b3f125079 - - - - - -] Device tap9202269c-7b cannot be used as it has no MAC address#033[00m Oct 5 06:06:37 localhost nova_compute[297130]: 2025-10-05 10:06:37.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 2.7 KiB/s wr, 86 op/s Oct 5 06:06:37 localhost kernel: device tap9202269c-7b entered promiscuous mode Oct 5 06:06:37 localhost ovn_controller[157556]: 2025-10-05T10:06:37Z|00140|binding|INFO|Claiming lport 9202269c-7b38-4952-853b-1cb6523adeff for this chassis. Oct 5 06:06:37 localhost ovn_controller[157556]: 2025-10-05T10:06:37Z|00141|binding|INFO|9202269c-7b38-4952-853b-1cb6523adeff: Claiming unknown Oct 5 06:06:37 localhost NetworkManager[5970]: [1759658797.6753] manager: (tap9202269c-7b): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Oct 5 06:06:37 localhost nova_compute[297130]: 2025-10-05 10:06:37.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:37 localhost systemd-udevd[331175]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost nova_compute[297130]: 2025-10-05 10:06:37.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:37 localhost ovn_controller[157556]: 2025-10-05T10:06:37Z|00142|binding|INFO|Setting lport 9202269c-7b38-4952-853b-1cb6523adeff ovn-installed in OVS Oct 5 06:06:37 localhost ovn_controller[157556]: 2025-10-05T10:06:37Z|00143|binding|INFO|Setting lport 9202269c-7b38-4952-853b-1cb6523adeff up in Southbound Oct 5 06:06:37 localhost nova_compute[297130]: 2025-10-05 10:06:37.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:37 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:37.713 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-ee88a216-df6e-45f1-9123-2d5675c416c1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee88a216-df6e-45f1-9123-2d5675c416c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d164b45ed944867815970d9328a76bf', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7cf9eb1a-8031-4143-862c-6beb7f97d627, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9202269c-7b38-4952-853b-1cb6523adeff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:37.717 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 9202269c-7b38-4952-853b-1cb6523adeff in datapath ee88a216-df6e-45f1-9123-2d5675c416c1 bound to our chassis#033[00m Oct 5 06:06:37 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:37.719 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ee88a216-df6e-45f1-9123-2d5675c416c1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:37 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:37.720 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7cf6c4-b4ea-40cf-a5fd-aaf80440483d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost journal[237639]: ethtool ioctl error on tap9202269c-7b: No such device Oct 5 06:06:37 localhost nova_compute[297130]: 2025-10-05 10:06:37.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:37 localhost nova_compute[297130]: 2025-10-05 10:06:37.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:38 localhost podman[331245]: Oct 5 06:06:38 localhost podman[331245]: 2025-10-05 10:06:38.551473075 +0000 UTC m=+0.063805911 container create 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:38 localhost systemd[1]: Started libpod-conmon-90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75.scope. Oct 5 06:06:38 localhost systemd[1]: tmp-crun.9HReH2.mount: Deactivated successfully. Oct 5 06:06:38 localhost systemd[1]: Started libcrun container. 
Oct 5 06:06:38 localhost podman[331245]: 2025-10-05 10:06:38.517611727 +0000 UTC m=+0.029944543 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:38 localhost nova_compute[297130]: 2025-10-05 10:06:38.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448ea162da8c5de60bec0b33ca4ac8205df9a10bcd2f033f293a1cf50d5b4b25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:38 localhost podman[331245]: 2025-10-05 10:06:38.665679861 +0000 UTC m=+0.178012697 container init 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:06:38 localhost podman[331245]: 2025-10-05 10:06:38.675634752 +0000 UTC m=+0.187967588 container start 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:06:38 localhost dnsmasq[331263]: started, 
version 2.85 cachesize 150 Oct 5 06:06:38 localhost dnsmasq[331263]: DNS service limited to local subnets Oct 5 06:06:38 localhost dnsmasq[331263]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:38 localhost dnsmasq[331263]: warning: no upstream servers configured Oct 5 06:06:38 localhost dnsmasq-dhcp[331263]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:06:38 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 0 addresses Oct 5 06:06:38 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:38 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:38 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:38.725 271653 INFO neutron.agent.dhcp.agent [None req-c0ba642c-3d71-4811-8051-4f1b3f125079 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:36Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f07c97bc-46bb-410e-9a4c-57ab7ec2ea78, ip_allocation=immediate, mac_address=fa:16:3e:6f:b8:48, name=tempest-AllowedAddressPairIpV6TestJSON-788839146, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:33Z, description=, dns_domain=, id=ee88a216-df6e-45f1-9123-2d5675c416c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1649939855, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, provider:network_type=geneve, provider:physical_network=None, 
provider:segmentation_id=43390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1753, status=ACTIVE, subnets=['96e74839-f525-486b-b759-c2b10bb5ab0b'], tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:35Z, vlan_transparent=None, network_id=ee88a216-df6e-45f1-9123-2d5675c416c1, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0d3758d3-10cf-4853-9555-ec79169270af'], standard_attr_id=1773, status=DOWN, tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:36Z on network ee88a216-df6e-45f1-9123-2d5675c416c1#033[00m Oct 5 06:06:38 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:38.845 271653 INFO neutron.agent.dhcp.agent [None req-1b78b4bd-7fb5-4639-9884-07f59ccf559e - - - - - -] DHCP configuration for ports {'99ae3f2b-6693-4cfd-b5d4-1b087e05013a'} is completed#033[00m Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this 
cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:06:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:06:38 localhost dnsmasq[331263]: read 
/var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 1 addresses Oct 5 06:06:38 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:38 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:38 localhost podman[331282]: 2025-10-05 10:06:38.897804795 +0000 UTC m=+0.058063576 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:39 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:39.092 271653 INFO neutron.agent.dhcp.agent [None req-3b7a4dbe-1d49-4c89-8e4e-483cb9409d3c - - - - - -] DHCP configuration for ports {'f07c97bc-46bb-410e-9a4c-57ab7ec2ea78'} is completed#033[00m Oct 5 06:06:39 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:39.145 2 INFO neutron.agent.securitygroups_rpc [None req-c4d1c914-9f42-4764-bf0d-d13f6d9dcb90 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:39 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:39.285 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, 
binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:38Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0c58e8c5-afb1-459b-99f1-972043c28996, ip_allocation=immediate, mac_address=fa:16:3e:d9:66:8d, name=tempest-AllowedAddressPairIpV6TestJSON-362049307, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:33Z, description=, dns_domain=, id=ee88a216-df6e-45f1-9123-2d5675c416c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1649939855, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1753, status=ACTIVE, subnets=['96e74839-f525-486b-b759-c2b10bb5ab0b'], tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:35Z, vlan_transparent=None, network_id=ee88a216-df6e-45f1-9123-2d5675c416c1, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0d3758d3-10cf-4853-9555-ec79169270af'], standard_attr_id=1776, status=DOWN, tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:38Z on network ee88a216-df6e-45f1-9123-2d5675c416c1#033[00m Oct 5 06:06:39 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 2 addresses Oct 5 06:06:39 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:39 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:39 localhost podman[331322]: 
2025-10-05 10:06:39.473592155 +0000 UTC m=+0.059460253 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:06:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 2.7 KiB/s wr, 86 op/s Oct 5 06:06:39 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:39.768 271653 INFO neutron.agent.dhcp.agent [None req-bd4a796b-a141-450c-820e-555137ff7a1f - - - - - -] DHCP configuration for ports {'0c58e8c5-afb1-459b-99f1-972043c28996'} is completed#033[00m Oct 5 06:06:39 localhost nova_compute[297130]: 2025-10-05 10:06:39.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:40 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:40.229 2 INFO neutron.agent.securitygroups_rpc [None req-683e8cfe-fbca-47cb-9979-a4649326b1aa 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:40 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 1 addresses Oct 5 06:06:40 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:40 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:40 
localhost podman[331361]: 2025-10-05 10:06:40.424703913 +0000 UTC m=+0.052467304 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 06:06:41 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:41.439 2 INFO neutron.agent.securitygroups_rpc [None req-e675479c-0810-4921-b4fc-eddb22a96c71 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e141 e141: 6 total, 6 up, 6 in Oct 5 06:06:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:06:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:06:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:06:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:06:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 33 op/s Oct 5 06:06:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:06:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:06:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:06:41 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:41.871 2 INFO neutron.agent.securitygroups_rpc [None req-c30ba9e1-2900-4469-8322-eb8944f17634 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:41 localhost podman[331382]: 2025-10-05 10:06:41.912589134 +0000 UTC m=+0.079914658 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS) Oct 5 06:06:41 localhost podman[331382]: 2025-10-05 10:06:41.921146216 +0000 UTC m=+0.088471750 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:41 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:06:42 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:42.102 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c70356f5-37d7-42dc-b069-3638c4381952, ip_allocation=immediate, mac_address=fa:16:3e:8e:b8:57, name=tempest-AllowedAddressPairIpV6TestJSON-1386983110, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:33Z, description=, dns_domain=, id=ee88a216-df6e-45f1-9123-2d5675c416c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1649939855, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1753, status=ACTIVE, subnets=['96e74839-f525-486b-b759-c2b10bb5ab0b'], tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:35Z, 
vlan_transparent=None, network_id=ee88a216-df6e-45f1-9123-2d5675c416c1, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0d3758d3-10cf-4853-9555-ec79169270af'], standard_attr_id=1780, status=DOWN, tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:41Z on network ee88a216-df6e-45f1-9123-2d5675c416c1#033[00m Oct 5 06:06:42 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 2 addresses Oct 5 06:06:42 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:42 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:42 localhost podman[331418]: 2025-10-05 10:06:42.301738455 +0000 UTC m=+0.064691295 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:06:42 localhost systemd[1]: tmp-crun.XLjWU1.mount: Deactivated successfully. 
Oct 5 06:06:42 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:42.570 271653 INFO neutron.agent.dhcp.agent [None req-187d6122-10b8-44c4-a0e7-2c5cde28f492 - - - - - -] DHCP configuration for ports {'c70356f5-37d7-42dc-b069-3638c4381952'} is completed#033[00m Oct 5 06:06:43 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:43.339 2 INFO neutron.agent.securitygroups_rpc [None req-b207964d-c473-4b5c-80bf-217bcabe0d8d 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:43 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:43.384 2 INFO neutron.agent.securitygroups_rpc [None req-c80266e1-ce8b-42cd-8ef9-5134d4e4e2d9 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:43 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 1 addresses Oct 5 06:06:43 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:43 localhost podman[331455]: 2025-10-05 10:06:43.543180874 +0000 UTC m=+0.058016165 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:43 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:43 localhost ceph-mgr[301363]: 
log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s Oct 5 06:06:43 localhost nova_compute[297130]: 2025-10-05 10:06:43.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:43 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:43.882 2 INFO neutron.agent.securitygroups_rpc [None req-cee95e86-30a7-4cf2-9265-4836280d6987 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:43.913 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b7313af7-43cd-4fef-a89d-5b2246e24934, ip_allocation=immediate, mac_address=fa:16:3e:1d:8e:ab, name=tempest-AllowedAddressPairIpV6TestJSON-1261985002, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:33Z, description=, dns_domain=, id=ee88a216-df6e-45f1-9123-2d5675c416c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1649939855, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1753, status=ACTIVE, subnets=['96e74839-f525-486b-b759-c2b10bb5ab0b'], tags=[], 
tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:35Z, vlan_transparent=None, network_id=ee88a216-df6e-45f1-9123-2d5675c416c1, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0d3758d3-10cf-4853-9555-ec79169270af'], standard_attr_id=1787, status=DOWN, tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:43Z on network ee88a216-df6e-45f1-9123-2d5675c416c1#033[00m Oct 5 06:06:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:44 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 2 addresses Oct 5 06:06:44 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:44 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:44 localhost podman[331493]: 2025-10-05 10:06:44.064071976 +0000 UTC m=+0.053537242 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:06:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:44.357 271653 INFO neutron.agent.dhcp.agent [None req-615e0ad3-2ef3-4d69-9b37-300a7670f1e9 - - - - - -] DHCP configuration for ports {'b7313af7-43cd-4fef-a89d-5b2246e24934'} is 
completed#033[00m Oct 5 06:06:44 localhost nova_compute[297130]: 2025-10-05 10:06:44.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:45.019 2 INFO neutron.agent.securitygroups_rpc [None req-975a16a1-2899-4f2c-9683-7c6f4127db83 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:45 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 1 addresses Oct 5 06:06:45 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:45 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:45 localhost podman[331531]: 2025-10-05 10:06:45.242646501 +0000 UTC m=+0.059332840 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:06:45 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:45.409 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:45Z, description=, device_id=df8551a3-76b3-47be-84e3-69ca68d21797, 
device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=82c8edec-fbed-435e-92dd-31bb9ef08a38, ip_allocation=immediate, mac_address=fa:16:3e:55:dd:49, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1800, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:06:45Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:06:45 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:06:45 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:45 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:45 localhost podman[331566]: 2025-10-05 10:06:45.646578882 +0000 UTC m=+0.077170534 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 29 op/s Oct 5 06:06:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:45.773 2 INFO neutron.agent.securitygroups_rpc [None req-af9fe55a-377a-42e5-b25a-66e050161e7a 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:45 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:45.832 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:45Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7ff519f0-f402-4c9c-a30f-bbfd6848de90, ip_allocation=immediate, mac_address=fa:16:3e:83:6a:89, name=tempest-AllowedAddressPairIpV6TestJSON-408047875, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:33Z, description=, dns_domain=, id=ee88a216-df6e-45f1-9123-2d5675c416c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1649939855, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43390, qos_policy_id=None, 
revision_number=2, router:external=False, shared=False, standard_attr_id=1753, status=ACTIVE, subnets=['96e74839-f525-486b-b759-c2b10bb5ab0b'], tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:35Z, vlan_transparent=None, network_id=ee88a216-df6e-45f1-9123-2d5675c416c1, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0d3758d3-10cf-4853-9555-ec79169270af'], standard_attr_id=1801, status=DOWN, tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:45Z on network ee88a216-df6e-45f1-9123-2d5675c416c1#033[00m Oct 5 06:06:45 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:45.956 271653 INFO neutron.agent.dhcp.agent [None req-ebb9e2da-1a65-4461-bcbe-a48d103f7a64 - - - - - -] DHCP configuration for ports {'82c8edec-fbed-435e-92dd-31bb9ef08a38'} is completed#033[00m Oct 5 06:06:46 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 2 addresses Oct 5 06:06:46 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:46 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:46 localhost podman[331606]: 2025-10-05 10:06:46.031712315 +0000 UTC m=+0.057061319 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 
06:06:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:46.319 271653 INFO neutron.agent.dhcp.agent [None req-98a12b28-8a4f-4464-9a2c-f7f908ec6d4c - - - - - -] DHCP configuration for ports {'7ff519f0-f402-4c9c-a30f-bbfd6848de90'} is completed#033[00m Oct 5 06:06:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:46.444 2 INFO neutron.agent.securitygroups_rpc [None req-54a9b384-5b39-4cb5-8dd8-7b8811d48589 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:46.600 2 INFO neutron.agent.securitygroups_rpc [None req-54a9b384-5b39-4cb5-8dd8-7b8811d48589 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:46 localhost openstack_network_exporter[250246]: ERROR 10:06:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:46 localhost openstack_network_exporter[250246]: ERROR 10:06:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:06:46 localhost openstack_network_exporter[250246]: ERROR 10:06:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:06:46 localhost openstack_network_exporter[250246]: ERROR 10:06:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:06:46 localhost openstack_network_exporter[250246]: Oct 5 06:06:46 localhost openstack_network_exporter[250246]: ERROR 10:06:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:06:46 localhost openstack_network_exporter[250246]: Oct 5 06:06:47 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:47.106 2 INFO 
neutron.agent.securitygroups_rpc [None req-77d35cad-696a-43a3-aeb7-2ddda9965d74 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:47 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:47.186 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:06:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3605a54a-8b20-4480-b71d-afb7ea3f162e, ip_allocation=immediate, mac_address=fa:16:3e:c9:69:e3, name=tempest-AllowedAddressPairIpV6TestJSON-1440060054, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:33Z, description=, dns_domain=, id=ee88a216-df6e-45f1-9123-2d5675c416c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1649939855, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1753, status=ACTIVE, subnets=['96e74839-f525-486b-b759-c2b10bb5ab0b'], tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, updated_at=2025-10-05T10:06:35Z, vlan_transparent=None, network_id=ee88a216-df6e-45f1-9123-2d5675c416c1, port_security_enabled=True, project_id=7d164b45ed944867815970d9328a76bf, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0d3758d3-10cf-4853-9555-ec79169270af'], standard_attr_id=1808, status=DOWN, tags=[], tenant_id=7d164b45ed944867815970d9328a76bf, 
updated_at=2025-10-05T10:06:46Z on network ee88a216-df6e-45f1-9123-2d5675c416c1#033[00m Oct 5 06:06:47 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 3 addresses Oct 5 06:06:47 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:47 localhost podman[331642]: 2025-10-05 10:06:47.357323525 +0000 UTC m=+0.044313092 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:06:47 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:47 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:47.557 2 INFO neutron.agent.securitygroups_rpc [None req-cd6b925e-5b31-44b2-bb66-2a94b469b1f8 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:47 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:47.562 271653 INFO neutron.agent.dhcp.agent [None req-a7c0088e-6b85-4eee-96eb-ae7c7f8a1ae8 - - - - - -] DHCP configuration for ports {'3605a54a-8b20-4480-b71d-afb7ea3f162e'} is completed#033[00m Oct 5 06:06:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e142 e142: 6 total, 6 up, 6 in Oct 5 06:06:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 3.4 KiB/s rd, 818 B/s 
wr, 5 op/s Oct 5 06:06:48 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:48.414 2 INFO neutron.agent.securitygroups_rpc [None req-ca8775b6-9a91-414b-91e2-5cdf2f4a29b5 66f5f3c3fea84dc59d8f4b0ce19fcf49 9995ae9ec275409eab70e1b7587c3571 - - default default] Security group member updated ['74b3fad2-e7e6-4bbe-a76b-524ed6175634']#033[00m Oct 5 06:06:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e143 e143: 6 total, 6 up, 6 in Oct 5 06:06:48 localhost nova_compute[297130]: 2025-10-05 10:06:48.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s rd, 1023 B/s wr, 7 op/s Oct 5 06:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:06:49 localhost nova_compute[297130]: 2025-10-05 10:06:49.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:49 localhost podman[331664]: 2025-10-05 10:06:49.952373634 +0000 UTC m=+0.113567220 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:06:49 localhost podman[331664]: 2025-10-05 10:06:49.965164301 +0000 UTC m=+0.126357877 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:06:49 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:06:49 localhost podman[331663]: 2025-10-05 10:06:49.929160155 +0000 UTC m=+0.094597556 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:50 localhost podman[331663]: 2025-10-05 10:06:50.009536643 +0000 UTC m=+0.174974094 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:06:50 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:50.018 2 INFO neutron.agent.securitygroups_rpc [None req-3706c1d5-eca7-468e-9b54-93969000b928 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated 
['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:50 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:06:50 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 2 addresses Oct 5 06:06:50 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:50 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:50 localhost podman[331722]: 2025-10-05 10:06:50.268919836 +0000 UTC m=+0.059051662 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:50 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:50.599 2 INFO neutron.agent.securitygroups_rpc [None req-922985ae-8f2a-4562-a4ae-71819a7745cf 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:50 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:06:50 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:06:50 localhost podman[331759]: 2025-10-05 10:06:50.651054246 +0000 UTC m=+0.060161471 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:50 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:06:50 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e144 e144: 6 total, 6 up, 6 in Oct 5 06:06:50 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 1 addresses Oct 5 06:06:50 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:50 localhost podman[331794]: 2025-10-05 10:06:50.835471147 +0000 UTC m=+0.054386105 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:06:50 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:51 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:51.660 2 INFO neutron.agent.securitygroups_rpc [None req-de821bfe-e6c2-4d08-8692-047608f1121d 7f745b4b103a4291b31577d8ba527060 7d164b45ed944867815970d9328a76bf - - default default] Security group member 
updated ['0d3758d3-10cf-4853-9555-ec79169270af']#033[00m Oct 5 06:06:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 761 MiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 5 op/s Oct 5 06:06:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e145 e145: 6 total, 6 up, 6 in Oct 5 06:06:51 localhost dnsmasq[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/addn_hosts - 0 addresses Oct 5 06:06:51 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/host Oct 5 06:06:51 localhost podman[331833]: 2025-10-05 10:06:51.88011494 +0000 UTC m=+0.052843153 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:06:51 localhost dnsmasq-dhcp[331263]: read /var/lib/neutron/dhcp/ee88a216-df6e-45f1-9123-2d5675c416c1/opts Oct 5 06:06:52 localhost dnsmasq[331263]: exiting on receipt of SIGTERM Oct 5 06:06:52 localhost podman[331870]: 2025-10-05 10:06:52.745114034 +0000 UTC m=+0.064302775 container kill 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:52 localhost systemd[1]: libpod-90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75.scope: Deactivated successfully. Oct 5 06:06:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:06:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:06:52 localhost podman[331882]: 2025-10-05 10:06:52.827985041 +0000 UTC m=+0.066778372 container died 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:06:52 localhost systemd[1]: tmp-crun.2t4f0j.mount: Deactivated successfully. 
Oct 5 06:06:52 localhost podman[331882]: 2025-10-05 10:06:52.865910269 +0000 UTC m=+0.104703570 container cleanup 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 06:06:52 localhost systemd[1]: libpod-conmon-90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75.scope: Deactivated successfully. Oct 5 06:06:52 localhost podman[331888]: 2025-10-05 10:06:52.904044523 +0000 UTC m=+0.132470003 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 06:06:52 localhost podman[331888]: 2025-10-05 10:06:52.917291421 +0000 UTC m=+0.145716951 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:06:52 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:06:52 localhost podman[331884]: 2025-10-05 10:06:52.959407463 +0000 UTC m=+0.188606414 container remove 90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee88a216-df6e-45f1-9123-2d5675c416c1, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:06:52 localhost nova_compute[297130]: 2025-10-05 10:06:52.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:52 localhost ovn_controller[157556]: 2025-10-05T10:06:52Z|00144|binding|INFO|Releasing lport 9202269c-7b38-4952-853b-1cb6523adeff from this chassis (sb_readonly=0) Oct 5 06:06:52 localhost ovn_controller[157556]: 2025-10-05T10:06:52Z|00145|binding|INFO|Setting lport 9202269c-7b38-4952-853b-1cb6523adeff down in Southbound Oct 5 06:06:52 localhost kernel: device tap9202269c-7b left promiscuous mode Oct 5 06:06:52 localhost nova_compute[297130]: 2025-10-05 10:06:52.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:52.985 163201 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-ee88a216-df6e-45f1-9123-2d5675c416c1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee88a216-df6e-45f1-9123-2d5675c416c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d164b45ed944867815970d9328a76bf', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7cf9eb1a-8031-4143-862c-6beb7f97d627, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9202269c-7b38-4952-853b-1cb6523adeff) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:52.988 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 9202269c-7b38-4952-853b-1cb6523adeff in datapath ee88a216-df6e-45f1-9123-2d5675c416c1 unbound from our chassis#033[00m Oct 5 06:06:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:52.990 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ee88a216-df6e-45f1-9123-2d5675c416c1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:52.990 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[61701815-6462-4cd4-b8f1-e11478af5a9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:53 localhost nova_compute[297130]: 2025-10-05 10:06:53.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:53 localhost podman[331891]: 2025-10-05 10:06:53.024664943 +0000 UTC m=+0.247481551 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0) Oct 5 06:06:53 localhost podman[331891]: 2025-10-05 10:06:53.070187177 +0000 UTC m=+0.293003825 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:06:53 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:06:53 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:53.354 271653 INFO neutron.agent.dhcp.agent [None req-57a9e4b3-51fd-4171-9213-8a3b3243979b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:53 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:53.665 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 57 op/s Oct 5 06:06:53 localhost nova_compute[297130]: 2025-10-05 10:06:53.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:53 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:53.741 2 INFO neutron.agent.securitygroups_rpc [None req-0f6b70c8-8ba4-43a2-955f-5a485cb09cb4 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:53 localhost systemd[1]: tmp-crun.5xFq4Y.mount: Deactivated successfully. Oct 5 06:06:53 localhost systemd[1]: var-lib-containers-storage-overlay-448ea162da8c5de60bec0b33ca4ac8205df9a10bcd2f033f293a1cf50d5b4b25-merged.mount: Deactivated successfully. Oct 5 06:06:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90aa2c2e89c76d0e3f4e233a063bf45ce8fd517edf43b464217dc6d59a7e1c75-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:53 localhost systemd[1]: run-netns-qdhcp\x2dee88a216\x2ddf6e\x2d45f1\x2d9123\x2d2d5675c416c1.mount: Deactivated successfully. 
Oct 5 06:06:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:54.142 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:06:54 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3945027991' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:06:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:06:54 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3945027991' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:06:54 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:54.842 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:54 localhost nova_compute[297130]: 2025-10-05 10:06:54.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:54 localhost nova_compute[297130]: 2025-10-05 10:06:54.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:55.066 2 INFO neutron.agent.securitygroups_rpc [None req-f33575a7-63e5-4033-bff4-513fe0684cd0 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated 
['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:55 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:55.114 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:55 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:55.188 271653 INFO neutron.agent.linux.ip_lib [None req-5373227a-7cd5-40ba-8faf-e3d1a9aab6a6 - - - - - -] Device tap33fdcaa1-fc cannot be used as it has no MAC address#033[00m Oct 5 06:06:55 localhost nova_compute[297130]: 2025-10-05 10:06:55.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost kernel: device tap33fdcaa1-fc entered promiscuous mode Oct 5 06:06:55 localhost nova_compute[297130]: 2025-10-05 10:06:55.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost ovn_controller[157556]: 2025-10-05T10:06:55Z|00146|binding|INFO|Claiming lport 33fdcaa1-fc53-415f-b1fe-20e757982aa9 for this chassis. Oct 5 06:06:55 localhost ovn_controller[157556]: 2025-10-05T10:06:55Z|00147|binding|INFO|33fdcaa1-fc53-415f-b1fe-20e757982aa9: Claiming unknown Oct 5 06:06:55 localhost NetworkManager[5970]: [1759658815.2173] manager: (tap33fdcaa1-fc): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Oct 5 06:06:55 localhost systemd-udevd[331968]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:06:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:55.234 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-d0b951eb-5c1b-44e3-95af-af05d4e71a6b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b951eb-5c1b-44e3-95af-af05d4e71a6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9995ae9ec275409eab70e1b7587c3571', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=986d18e6-e5e1-424c-86dc-c434a08e0256, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=33fdcaa1-fc53-415f-b1fe-20e757982aa9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:55.235 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 33fdcaa1-fc53-415f-b1fe-20e757982aa9 in datapath d0b951eb-5c1b-44e3-95af-af05d4e71a6b bound to our chassis#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:55.236 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d0b951eb-5c1b-44e3-95af-af05d4e71a6b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:55 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:55.236 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[affab305-928b-4f0c-8fb1-3f1f938108c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost ovn_controller[157556]: 2025-10-05T10:06:55Z|00148|binding|INFO|Setting lport 33fdcaa1-fc53-415f-b1fe-20e757982aa9 ovn-installed in OVS Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost nova_compute[297130]: 2025-10-05 10:06:55.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost ovn_controller[157556]: 2025-10-05T10:06:55Z|00149|binding|INFO|Setting lport 33fdcaa1-fc53-415f-b1fe-20e757982aa9 up in Southbound Oct 5 06:06:55 localhost nova_compute[297130]: 2025-10-05 10:06:55.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost journal[237639]: ethtool ioctl error on tap33fdcaa1-fc: No such device Oct 5 06:06:55 localhost nova_compute[297130]: 2025-10-05 10:06:55.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost nova_compute[297130]: 2025-10-05 10:06:55.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:55 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:55.551 2 INFO neutron.agent.securitygroups_rpc [None req-028ca752-7a11-4005-953f-28a11d2ad44f cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.9 KiB/s wr, 49 op/s Oct 5 06:06:56 localhost podman[248157]: time="2025-10-05T10:06:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:06:56 localhost podman[248157]: @ - - [05/Oct/2025:10:06:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:06:56 localhost podman[248157]: @ - - [05/Oct/2025:10:06:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19326 "" "Go-http-client/1.1" Oct 5 06:06:56 localhost podman[332039]: Oct 5 06:06:56 localhost podman[332039]: 2025-10-05 10:06:56.285261677 +0000 UTC m=+0.103831316 container create 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:06:56 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:56.292 2 INFO neutron.agent.securitygroups_rpc [None req-be1c75d1-6103-45bc-ac58-5c31557614fb cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:56 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:56.315 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:56 localhost systemd[1]: Started libpod-conmon-2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334.scope. Oct 5 06:06:56 localhost podman[332039]: 2025-10-05 10:06:56.23338722 +0000 UTC m=+0.051956869 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:06:56 localhost systemd[1]: Started libcrun container. Oct 5 06:06:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05d518b6b88c9f9e85a53da7216a46cd661bbe3e76391b9a9571958098856af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:06:56 localhost podman[332039]: 2025-10-05 10:06:56.360939369 +0000 UTC m=+0.179509018 container init 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:06:56 localhost podman[332039]: 2025-10-05 10:06:56.371352411 +0000 UTC m=+0.189922050 container 
start 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:56 localhost dnsmasq[332057]: started, version 2.85 cachesize 150 Oct 5 06:06:56 localhost dnsmasq[332057]: DNS service limited to local subnets Oct 5 06:06:56 localhost dnsmasq[332057]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:06:56 localhost dnsmasq[332057]: warning: no upstream servers configured Oct 5 06:06:56 localhost dnsmasq-dhcp[332057]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 5 06:06:56 localhost dnsmasq[332057]: read /var/lib/neutron/dhcp/d0b951eb-5c1b-44e3-95af-af05d4e71a6b/addn_hosts - 0 addresses Oct 5 06:06:56 localhost dnsmasq-dhcp[332057]: read /var/lib/neutron/dhcp/d0b951eb-5c1b-44e3-95af-af05d4e71a6b/host Oct 5 06:06:56 localhost dnsmasq-dhcp[332057]: read /var/lib/neutron/dhcp/d0b951eb-5c1b-44e3-95af-af05d4e71a6b/opts Oct 5 06:06:56 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:56.628 271653 INFO neutron.agent.dhcp.agent [None req-b7bb3521-6a34-4483-8348-dfd48dfd5c42 - - - - - -] DHCP configuration for ports {'a0016316-207a-4a08-8de3-b82ffda876fb'} is completed#033[00m Oct 5 06:06:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e146 e146: 6 total, 6 up, 6 in Oct 5 06:06:56 localhost ovn_controller[157556]: 2025-10-05T10:06:56Z|00150|binding|INFO|Removing iface tap33fdcaa1-fc ovn-installed in OVS Oct 5 06:06:56 localhost 
ovn_controller[157556]: 2025-10-05T10:06:56Z|00151|binding|INFO|Removing lport 33fdcaa1-fc53-415f-b1fe-20e757982aa9 ovn-installed in OVS Oct 5 06:06:56 localhost nova_compute[297130]: 2025-10-05 10:06:56.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:56.836 163201 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port e1d0211b-379c-4202-af79-80b5441338d6 with type ""#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:56.838 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-d0b951eb-5c1b-44e3-95af-af05d4e71a6b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b951eb-5c1b-44e3-95af-af05d4e71a6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9995ae9ec275409eab70e1b7587c3571', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=986d18e6-e5e1-424c-86dc-c434a08e0256, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=33fdcaa1-fc53-415f-b1fe-20e757982aa9) old= matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:56.840 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 33fdcaa1-fc53-415f-b1fe-20e757982aa9 in datapath d0b951eb-5c1b-44e3-95af-af05d4e71a6b unbound from our chassis#033[00m Oct 5 06:06:56 localhost nova_compute[297130]: 2025-10-05 10:06:56.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:56.842 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d0b951eb-5c1b-44e3-95af-af05d4e71a6b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:06:56 localhost ovn_metadata_agent[163196]: 2025-10-05 10:06:56.843 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[e5048f91-be0e-4a3b-901e-e93c3ad2b225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:06:56 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:56.878 2 INFO neutron.agent.securitygroups_rpc [None req-9f697f19-19c5-4264-bc4a-40176852ecfc cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:56 localhost dnsmasq[332057]: exiting on receipt of SIGTERM Oct 5 06:06:56 localhost podman[332076]: 2025-10-05 10:06:56.91091859 +0000 UTC m=+0.063653177 container kill 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:06:56 localhost systemd[1]: libpod-2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334.scope: Deactivated successfully. Oct 5 06:06:56 localhost podman[332089]: 2025-10-05 10:06:56.986020246 +0000 UTC m=+0.060550473 container died 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:06:57 localhost podman[332089]: 2025-10-05 10:06:57.068656746 +0000 UTC m=+0.143186973 container cleanup 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:06:57 localhost systemd[1]: libpod-conmon-2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334.scope: Deactivated successfully. 
Oct 5 06:06:57 localhost podman[332096]: 2025-10-05 10:06:57.090342504 +0000 UTC m=+0.146424820 container remove 2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d0b951eb-5c1b-44e3-95af-af05d4e71a6b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:06:57 localhost nova_compute[297130]: 2025-10-05 10:06:57.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:57 localhost kernel: device tap33fdcaa1-fc left promiscuous mode Oct 5 06:06:57 localhost nova_compute[297130]: 2025-10-05 10:06:57.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:57.135 271653 INFO neutron.agent.dhcp.agent [None req-3575a425-946a-4d61-a433-872170669ca4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:57 localhost systemd[1]: var-lib-containers-storage-overlay-a05d518b6b88c9f9e85a53da7216a46cd661bbe3e76391b9a9571958098856af-merged.mount: Deactivated successfully. Oct 5 06:06:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2bc5af4659322eb8d0fccd33633641b1013dd55626d4a473aa94c836de9af334-userdata-shm.mount: Deactivated successfully. Oct 5 06:06:57 localhost systemd[1]: run-netns-qdhcp\x2dd0b951eb\x2d5c1b\x2d44e3\x2d95af\x2daf05d4e71a6b.mount: Deactivated successfully. 
Oct 5 06:06:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:57.378 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 4.3 KiB/s wr, 97 op/s Oct 5 06:06:57 localhost neutron_sriov_agent[264647]: 2025-10-05 10:06:57.843 2 INFO neutron.agent.securitygroups_rpc [None req-a90c418c-ab0e-42ad-9460-79491727dda5 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:06:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:06:57.876 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:06:57 localhost nova_compute[297130]: 2025-10-05 10:06:57.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:58 localhost nova_compute[297130]: 2025-10-05 10:06:58.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:06:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e147 e147: 6 total, 6 up, 6 in Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. 
Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.878355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818878398, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1475, "num_deletes": 258, "total_data_size": 1957894, "memory_usage": 1986800, "flush_reason": "Manual Compaction"} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818888008, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1279056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21208, "largest_seqno": 22678, "table_properties": {"data_size": 1273020, "index_size": 3378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13471, "raw_average_key_size": 21, "raw_value_size": 1260709, "raw_average_value_size": 1976, "num_data_blocks": 147, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658736, "oldest_key_time": 1759658736, "file_creation_time": 1759658818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 9772 microseconds, and 4290 cpu microseconds. Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.888104) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1279056 bytes OK Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.888175) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.889976) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.890002) EVENT_LOG_v1 {"time_micros": 1759658818889995, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.890022) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1950875, prev total WAL file size 
1950875, number of live WAL files 2. Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.890905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1249KB)], [33(14MB)] Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818890939, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 16370794, "oldest_snapshot_seqno": -1} Oct 5 06:06:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12535 keys, 14501135 bytes, temperature: kUnknown Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818974946, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 14501135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14432676, "index_size": 36038, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31365, "raw_key_size": 339010, "raw_average_key_size": 27, "raw_value_size": 14222136, "raw_average_value_size": 1134, "num_data_blocks": 1335, "num_entries": 12535, "num_filter_entries": 12535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.975244) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14501135 bytes Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.976846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.6 rd, 172.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(24.1) write-amplify(11.3) OK, records in: 13066, records dropped: 531 output_compression: NoCompression Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.976877) EVENT_LOG_v1 {"time_micros": 1759658818976862, "job": 18, "event": "compaction_finished", "compaction_time_micros": 84120, "compaction_time_cpu_micros": 43155, "output_level": 6, "num_output_files": 1, "total_output_size": 14501135, "num_input_records": 13066, "num_output_records": 12535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818977308, "job": 18, "event": "table_file_deletion", "file_number": 35} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658818979632, "job": 
18, "event": "table_file_deletion", "file_number": 33} Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.890840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.979689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.979696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.979699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.979702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:58 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:06:58.979704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:06:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.8 KiB/s wr, 85 op/s Oct 5 06:06:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e148 e148: 6 total, 6 up, 6 in Oct 5 06:06:59 localhost nova_compute[297130]: 2025-10-05 10:06:59.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:01.103 271653 INFO neutron.agent.linux.ip_lib [None req-c98b29e2-64aa-4e75-a006-320dee647d66 - - - - - -] Device tap99b81bcf-16 cannot be used as it has no MAC address#033[00m Oct 5 06:07:01 localhost nova_compute[297130]: 2025-10-05 10:07:01.128 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost kernel: device tap99b81bcf-16 entered promiscuous mode Oct 5 06:07:01 localhost NetworkManager[5970]: [1759658821.1379] manager: (tap99b81bcf-16): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Oct 5 06:07:01 localhost nova_compute[297130]: 2025-10-05 10:07:01.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost ovn_controller[157556]: 2025-10-05T10:07:01Z|00152|binding|INFO|Claiming lport 99b81bcf-160d-4b11-90ae-9edc3102d722 for this chassis. Oct 5 06:07:01 localhost ovn_controller[157556]: 2025-10-05T10:07:01Z|00153|binding|INFO|99b81bcf-160d-4b11-90ae-9edc3102d722: Claiming unknown Oct 5 06:07:01 localhost systemd-udevd[332130]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:07:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:01.150 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-d7356bc2-2948-4361-bd54-4e286b5582fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7356bc2-2948-4361-bd54-4e286b5582fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e182d520-c828-4ed6-90d2-a4899973b068, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=99b81bcf-160d-4b11-90ae-9edc3102d722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:01.152 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 99b81bcf-160d-4b11-90ae-9edc3102d722 in datapath d7356bc2-2948-4361-bd54-4e286b5582fc bound to our chassis#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:01.153 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d7356bc2-2948-4361-bd54-4e286b5582fc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:01 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:01.155 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[b85f08b5-c8e4-4974-8b91-0d685238355b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost ovn_controller[157556]: 2025-10-05T10:07:01Z|00154|binding|INFO|Setting lport 99b81bcf-160d-4b11-90ae-9edc3102d722 ovn-installed in OVS Oct 5 06:07:01 localhost ovn_controller[157556]: 2025-10-05T10:07:01Z|00155|binding|INFO|Setting lport 99b81bcf-160d-4b11-90ae-9edc3102d722 up in Southbound Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on 
tap99b81bcf-16: No such device Oct 5 06:07:01 localhost nova_compute[297130]: 2025-10-05 10:07:01.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost journal[237639]: ethtool ioctl error on tap99b81bcf-16: No such device Oct 5 06:07:01 localhost nova_compute[297130]: 2025-10-05 10:07:01.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost nova_compute[297130]: 2025-10-05 10:07:01.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:01 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:01.547 2 INFO neutron.agent.securitygroups_rpc [None req-967fdf5d-46e1-48e2-b529-a5dcf26b47fa cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 1.7 KiB/s wr, 55 op/s Oct 5 06:07:02 localhost podman[332201]: Oct 5 06:07:02 localhost podman[332201]: 2025-10-05 10:07:02.192263 +0000 UTC m=+0.095701795 container create ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.build-date=20251001, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:07:02 localhost systemd[1]: Started libpod-conmon-ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87.scope. Oct 5 06:07:02 localhost podman[332201]: 2025-10-05 10:07:02.144847154 +0000 UTC m=+0.048285959 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:02 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:02.253 2 INFO neutron.agent.securitygroups_rpc [None req-e438c14c-9f1f-4532-9e26-9190e47ce09e cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:02 localhost systemd[1]: Started libcrun container. Oct 5 06:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe3a8376622225e64ecf2c7bfbaeb7e3477c21a3976c925e451d4486e4076b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:02 localhost podman[332201]: 2025-10-05 10:07:02.271297093 +0000 UTC m=+0.174735888 container init ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:07:02 localhost podman[332201]: 2025-10-05 10:07:02.28188908 +0000 UTC m=+0.185327895 container start 
ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:07:02 localhost dnsmasq[332219]: started, version 2.85 cachesize 150 Oct 5 06:07:02 localhost dnsmasq[332219]: DNS service limited to local subnets Oct 5 06:07:02 localhost dnsmasq[332219]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:02 localhost dnsmasq[332219]: warning: no upstream servers configured Oct 5 06:07:02 localhost dnsmasq-dhcp[332219]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:02 localhost dnsmasq[332219]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/addn_hosts - 0 addresses Oct 5 06:07:02 localhost dnsmasq-dhcp[332219]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/host Oct 5 06:07:02 localhost dnsmasq-dhcp[332219]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/opts Oct 5 06:07:02 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:02.438 271653 INFO neutron.agent.dhcp.agent [None req-d1d4c5b2-a9e8-4ae6-9bf7-d3333871849d - - - - - -] DHCP configuration for ports {'aa5d7650-b045-4b83-ab7d-afdcbb2e55d9'} is completed#033[00m Oct 5 06:07:02 localhost dnsmasq[332219]: exiting on receipt of SIGTERM Oct 5 06:07:02 localhost podman[332237]: 2025-10-05 10:07:02.630504312 +0000 UTC m=+0.063651207 container kill ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:07:02 localhost systemd[1]: libpod-ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87.scope: Deactivated successfully. Oct 5 06:07:02 localhost podman[332251]: 2025-10-05 10:07:02.694481647 +0000 UTC m=+0.046650026 container died ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:02 localhost podman[332251]: 2025-10-05 10:07:02.77610298 +0000 UTC m=+0.128271319 container cleanup ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:02 localhost systemd[1]: 
libpod-conmon-ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87.scope: Deactivated successfully. Oct 5 06:07:02 localhost podman[332252]: 2025-10-05 10:07:02.801733315 +0000 UTC m=+0.148348783 container remove ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:03.147 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:02Z, description=, device_id=e9213023-1348-4ffa-914b-4f92cfcd4a65, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dfd5851f-4fe4-49dd-892a-840058463c8d, ip_allocation=immediate, mac_address=fa:16:3e:5a:ef:e5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, 
subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1883, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:07:02Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:07:03 localhost systemd[1]: var-lib-containers-storage-overlay-fbe3a8376622225e64ecf2c7bfbaeb7e3477c21a3976c925e451d4486e4076b6-merged.mount: Deactivated successfully. Oct 5 06:07:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceabeb524994e0dffc67ea8920ca0b8f0331e88a56037ad8b2fb0d500153aa87-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:03 localhost podman[332296]: 2025-10-05 10:07:03.356112485 +0000 UTC m=+0.062838915 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 06:07:03 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:07:03 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:03 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:03 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:03.587 271653 INFO 
neutron.agent.dhcp.agent [None req-063203b0-b8b6-46ac-89df-f601c237b1f1 - - - - - -] DHCP configuration for ports {'dfd5851f-4fe4-49dd-892a-840058463c8d'} is completed#033[00m Oct 5 06:07:03 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:03.667 2 INFO neutron.agent.securitygroups_rpc [None req-113037f4-e6a1-483a-9c1d-2f96e0139405 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 3.3 KiB/s wr, 97 op/s Oct 5 06:07:03 localhost nova_compute[297130]: 2025-10-05 10:07:03.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:04 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:04.036 2 INFO neutron.agent.securitygroups_rpc [None req-3496327f-e2d7-49dc-ab01-143e65f195ad 6ef678b66aca4c389c46bd32e9f75f44 8b0117c734aa4a26be5c16b9cc3abffe - - default default] Security group rule updated ['34787280-e67a-4595-a7a5-2948c88f70c0']#033[00m Oct 5 06:07:04 localhost podman[332369]: Oct 5 06:07:04 localhost podman[332369]: 2025-10-05 10:07:04.412489146 +0000 UTC m=+0.094733210 container create e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:07:04 localhost systemd[1]: Started libpod-conmon-e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c.scope. Oct 5 06:07:04 localhost systemd[1]: Started libcrun container. Oct 5 06:07:04 localhost podman[332369]: 2025-10-05 10:07:04.369038658 +0000 UTC m=+0.051282742 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d8bea2c8a2f357c5cbcd3083c45667e0cf9930bc9693694719bee47fe3088f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:04 localhost podman[332369]: 2025-10-05 10:07:04.479736009 +0000 UTC m=+0.161980073 container init e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:04 localhost podman[332369]: 2025-10-05 10:07:04.4908235 +0000 UTC m=+0.173067564 container start e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:07:04 localhost dnsmasq[332405]: started, version 2.85 cachesize 150 Oct 5 06:07:04 localhost dnsmasq[332405]: DNS service limited to local subnets Oct 5 06:07:04 localhost dnsmasq[332405]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:04 localhost dnsmasq[332405]: warning: no upstream servers configured Oct 5 06:07:04 localhost dnsmasq-dhcp[332405]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 5 06:07:04 localhost dnsmasq-dhcp[332405]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:04 localhost dnsmasq[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/addn_hosts - 1 addresses Oct 5 06:07:04 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/host Oct 5 06:07:04 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/opts Oct 5 06:07:04 localhost podman[332383]: 2025-10-05 10:07:04.540782065 +0000 UTC m=+0.088602944 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, 
config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:04.545 271653 INFO neutron.agent.dhcp.agent [None req-01e7f6c7-2e3e-42b7-b278-bddd54733e25 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:01Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=d2b27196-2680-4573-918c-4ce46999457d, ip_allocation=immediate, mac_address=fa:16:3e:b3:05:73, name=tempest-PortsIpV6TestJSON-1206872519, network=admin_state_up=True, availability_zone_hints=[], 
availability_zones=[], created_at=2025-10-05T10:06:58Z, description=, dns_domain=, id=d7356bc2-2948-4361-bd54-4e286b5582fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-8530312, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=110, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=1862, status=ACTIVE, subnets=['377bf24a-9db5-4a8a-bd41-658b7b7ace42', '9bed8eec-a6d8-42eb-8767-6050609d4486'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:00Z, vlan_transparent=None, network_id=d7356bc2-2948-4361-bd54-4e286b5582fc, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=1876, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:01Z on network d7356bc2-2948-4361-bd54-4e286b5582fc#033[00m Oct 5 06:07:04 localhost podman[332383]: 2025-10-05 10:07:04.557242541 +0000 UTC m=+0.105063390 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 
'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:04 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:07:04 localhost podman[332384]: 2025-10-05 10:07:04.648570697 +0000 UTC m=+0.193445606 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:07:04 localhost podman[332449]: 2025-10-05 10:07:04.738366991 +0000 UTC m=+0.057466418 container kill e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:07:04 localhost dnsmasq[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/addn_hosts - 2 addresses Oct 5 06:07:04 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/host Oct 5 06:07:04 
localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/opts Oct 5 06:07:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:04.748 271653 INFO neutron.agent.dhcp.agent [None req-c5b92d79-0fa6-4b61-be39-432a9d44d27f - - - - - -] DHCP configuration for ports {'99b81bcf-160d-4b11-90ae-9edc3102d722', 'd2b27196-2680-4573-918c-4ce46999457d', 'aa5d7650-b045-4b83-ab7d-afdcbb2e55d9'} is completed#033[00m Oct 5 06:07:04 localhost podman[332384]: 2025-10-05 10:07:04.778919811 +0000 UTC m=+0.323794780 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:07:04 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:07:04 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:04.799 2 INFO neutron.agent.securitygroups_rpc [None req-10105f5e-927c-4e7a-9302-4d7e601f16d3 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:04 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:04.900 271653 INFO neutron.agent.dhcp.agent [None req-01e7f6c7-2e3e-42b7-b278-bddd54733e25 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:01Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d2b27196-2680-4573-918c-4ce46999457d, ip_allocation=immediate, mac_address=fa:16:3e:b3:05:73, name=tempest-PortsIpV6TestJSON-1206872519, network_id=d7356bc2-2948-4361-bd54-4e286b5582fc, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=1876, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:02Z on network d7356bc2-2948-4361-bd54-4e286b5582fc#033[00m Oct 5 06:07:04 localhost nova_compute[297130]: 2025-10-05 10:07:04.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:05.012 271653 INFO neutron.agent.dhcp.agent [None req-130ba5ad-8bf2-44c1-921c-e450c1d8dfc4 - - - - - -] DHCP configuration for ports {'d2b27196-2680-4573-918c-4ce46999457d'} is completed#033[00m Oct 5 06:07:05 localhost dnsmasq[332405]: read 
/var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/addn_hosts - 1 addresses Oct 5 06:07:05 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/host Oct 5 06:07:05 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/opts Oct 5 06:07:05 localhost podman[332489]: 2025-10-05 10:07:05.082594864 +0000 UTC m=+0.057376546 container kill e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:05.235 271653 INFO neutron.agent.dhcp.agent [None req-01e7f6c7-2e3e-42b7-b278-bddd54733e25 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:01Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=d2b27196-2680-4573-918c-4ce46999457d, ip_allocation=immediate, mac_address=fa:16:3e:b3:05:73, name=tempest-PortsIpV6TestJSON-1206872519, network_id=d7356bc2-2948-4361-bd54-4e286b5582fc, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['72863814-32f3-4006-a64f-d6dada584ee1'], standard_attr_id=1876, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, 
updated_at=2025-10-05T10:07:03Z on network d7356bc2-2948-4361-bd54-4e286b5582fc#033[00m Oct 5 06:07:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:05.334 271653 INFO neutron.agent.dhcp.agent [None req-485d19be-0b66-43b8-9126-a85460c3bf7a - - - - - -] DHCP configuration for ports {'d2b27196-2680-4573-918c-4ce46999457d'} is completed#033[00m Oct 5 06:07:05 localhost podman[332525]: 2025-10-05 10:07:05.431534875 +0000 UTC m=+0.062244699 container kill e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:07:05 localhost dnsmasq[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/addn_hosts - 2 addresses Oct 5 06:07:05 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/host Oct 5 06:07:05 localhost dnsmasq-dhcp[332405]: read /var/lib/neutron/dhcp/d7356bc2-2948-4361-bd54-4e286b5582fc/opts Oct 5 06:07:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 43 op/s Oct 5 06:07:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:05.688 271653 INFO neutron.agent.dhcp.agent [None req-5d9469d6-f536-4cff-9aaf-16d810a10251 - - - - - -] DHCP configuration for ports {'d2b27196-2680-4573-918c-4ce46999457d'} is completed#033[00m Oct 5 06:07:06 localhost dnsmasq[332405]: exiting on receipt of SIGTERM Oct 5 06:07:06 localhost podman[332562]: 2025-10-05 10:07:06.058421061 +0000 UTC 
m=+0.048457725 container kill e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001) Oct 5 06:07:06 localhost systemd[1]: libpod-e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c.scope: Deactivated successfully. Oct 5 06:07:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:07:06 localhost podman[332576]: 2025-10-05 10:07:06.127475234 +0000 UTC m=+0.051290922 container died e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:07:06 localhost podman[332576]: 2025-10-05 10:07:06.192052574 +0000 UTC m=+0.115868222 container remove e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7356bc2-2948-4361-bd54-4e286b5582fc, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:06 localhost podman[332587]: 2025-10-05 10:07:06.174727525 +0000 UTC m=+0.086318752 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 06:07:06 localhost podman[332587]: 2025-10-05 10:07:06.258127016 +0000 UTC m=+0.169718213 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Oct 5 06:07:06 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:07:06 localhost systemd[1]: libpod-conmon-e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c.scope: Deactivated successfully. 
Oct 5 06:07:06 localhost systemd[1]: tmp-crun.gLOKSt.mount: Deactivated successfully. Oct 5 06:07:06 localhost systemd[1]: var-lib-containers-storage-overlay-c1d8bea2c8a2f357c5cbcd3083c45667e0cf9930bc9693694719bee47fe3088f-merged.mount: Deactivated successfully. Oct 5 06:07:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0ae747a1094099e5b48bab57427bdab984c616e18de9e4a5e0e12ea57b34e8c-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:06 localhost kernel: device tap99b81bcf-16 left promiscuous mode Oct 5 06:07:06 localhost nova_compute[297130]: 2025-10-05 10:07:06.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:06 localhost ovn_controller[157556]: 2025-10-05T10:07:06Z|00156|binding|INFO|Releasing lport 99b81bcf-160d-4b11-90ae-9edc3102d722 from this chassis (sb_readonly=0) Oct 5 06:07:06 localhost ovn_controller[157556]: 2025-10-05T10:07:06Z|00157|binding|INFO|Setting lport 99b81bcf-160d-4b11-90ae-9edc3102d722 down in Southbound Oct 5 06:07:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:06.524 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-d7356bc2-2948-4361-bd54-4e286b5582fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7356bc2-2948-4361-bd54-4e286b5582fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': 
'4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e182d520-c828-4ed6-90d2-a4899973b068, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=99b81bcf-160d-4b11-90ae-9edc3102d722) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:06.526 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 99b81bcf-160d-4b11-90ae-9edc3102d722 in datapath d7356bc2-2948-4361-bd54-4e286b5582fc unbound from our chassis#033[00m Oct 5 06:07:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:06.527 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d7356bc2-2948-4361-bd54-4e286b5582fc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:06.528 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[791d168b-739f-4fe3-9c78-095aaacf7109]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:06 localhost nova_compute[297130]: 2025-10-05 10:07:06.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:06 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:06.560 271653 INFO neutron.agent.dhcp.agent [None req-345f9171-78ad-4803-9ac9-10b8d8a0ba17 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:06 localhost systemd[1]: 
run-netns-qdhcp\x2dd7356bc2\x2d2948\x2d4361\x2dbd54\x2d4e286b5582fc.mount: Deactivated successfully. Oct 5 06:07:06 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:06.564 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:06 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:06.576 271653 INFO neutron.agent.dhcp.agent [None req-7465743c-d36b-44ec-89c3-4f81ef98df71 - - - - - -] DHCP configuration for ports {'99b81bcf-160d-4b11-90ae-9edc3102d722', 'aa5d7650-b045-4b83-ab7d-afdcbb2e55d9'} is completed#033[00m Oct 5 06:07:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 e149: 6 total, 6 up, 6 in Oct 5 06:07:07 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:07.278 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 43 op/s Oct 5 06:07:07 localhost nova_compute[297130]: 2025-10-05 10:07:07.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:08 localhost nova_compute[297130]: 2025-10-05 10:07:08.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:08 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:08.825 2 INFO neutron.agent.securitygroups_rpc [None req-97d4cddd-c529-446f-b001-82d6e8bc8d22 ab7690a92b524e11ab2ac3dec938162a 32b7a2f31633456293e1c4169c868ef0 - - default default] Security group member updated ['bc75949e-95f2-4d6f-bfbc-251e7f7ef75d']#033[00m Oct 5 06:07:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Oct 5 06:07:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 35 op/s Oct 5 06:07:09 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:09.957 271653 INFO neutron.agent.linux.ip_lib [None req-e1b8fe44-aecf-480d-80ca-ee19cbbc6cdf - - - - - -] Device tap87cdc8bc-ef cannot be used as it has no MAC address#033[00m Oct 5 06:07:09 localhost nova_compute[297130]: 2025-10-05 10:07:09.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost kernel: device tap87cdc8bc-ef entered promiscuous mode Oct 5 06:07:10 localhost ovn_controller[157556]: 2025-10-05T10:07:10Z|00158|binding|INFO|Claiming lport 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a for this chassis. Oct 5 06:07:10 localhost ovn_controller[157556]: 2025-10-05T10:07:10Z|00159|binding|INFO|87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a: Claiming unknown Oct 5 06:07:10 localhost NetworkManager[5970]: [1759658830.0023] manager: (tap87cdc8bc-ef): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Oct 5 06:07:10 localhost nova_compute[297130]: 2025-10-05 10:07:10.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost systemd-udevd[332631]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:07:10 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:10.011 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-ba615150-6291-461e-914d-8614dd64d36b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba615150-6291-461e-914d-8614dd64d36b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f191af5fd15547479e573ab11c825146', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=025f706c-92d7-4ab9-940b-451c21f74524, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:10.012 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a in datapath ba615150-6291-461e-914d-8614dd64d36b bound to our chassis#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:10.013 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ba615150-6291-461e-914d-8614dd64d36b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:10.015 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[f929356a-b801-4d5e-a2ec-4671d1aac3ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost ovn_controller[157556]: 2025-10-05T10:07:10Z|00160|binding|INFO|Setting lport 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a ovn-installed in OVS Oct 5 06:07:10 localhost ovn_controller[157556]: 2025-10-05T10:07:10Z|00161|binding|INFO|Setting lport 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a up in Southbound Oct 5 06:07:10 localhost nova_compute[297130]: 2025-10-05 10:07:10.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost journal[237639]: ethtool ioctl error on tap87cdc8bc-ef: No such device Oct 5 06:07:10 localhost nova_compute[297130]: 2025-10-05 10:07:10.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost nova_compute[297130]: 2025-10-05 10:07:10.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost nova_compute[297130]: 2025-10-05 10:07:10.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:10.697 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:10 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:10.698 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:07:10 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:10.931 2 INFO neutron.agent.securitygroups_rpc [None req-86d0464d-16db-48b0-8aaa-42e0721c5e19 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['b41d26b0-78a8-4541-9b0c-eb273b0740f6']#033[00m Oct 5 06:07:10 localhost podman[332702]: Oct 5 06:07:10 localhost podman[332702]: 2025-10-05 10:07:10.989359062 +0000 UTC m=+0.090352042 container create 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, tcib_managed=true, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 06:07:11 localhost systemd[1]: Started libpod-conmon-48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc.scope. Oct 5 06:07:11 localhost podman[332702]: 2025-10-05 10:07:10.946353855 +0000 UTC m=+0.047346845 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:11 localhost systemd[1]: Started libcrun container. Oct 5 06:07:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f7ae6c7d41fd299d0cb05acc8e501524bebb57650ce902e98d5a0e29632750/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:11 localhost podman[332702]: 2025-10-05 10:07:11.065990539 +0000 UTC m=+0.166983489 container init 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 06:07:11 localhost podman[332702]: 2025-10-05 10:07:11.074046888 +0000 UTC m=+0.175039828 container start 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:07:11 localhost dnsmasq[332719]: started, version 2.85 cachesize 150 Oct 5 06:07:11 localhost dnsmasq[332719]: DNS service limited to local subnets Oct 5 06:07:11 localhost dnsmasq[332719]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:11 localhost dnsmasq[332719]: warning: no upstream servers configured Oct 5 06:07:11 localhost dnsmasq-dhcp[332719]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:07:11 localhost dnsmasq[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/addn_hosts - 0 addresses Oct 5 06:07:11 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/host Oct 5 06:07:11 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/opts Oct 5 06:07:11 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:11.127 2 INFO neutron.agent.securitygroups_rpc [None req-e8fe7c87-18bf-46af-bac6-b9952ace8090 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:11 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:11.284 271653 INFO neutron.agent.dhcp.agent [None req-6f5ebb09-9912-463b-9080-2ca4d8b1d50e - - - - - -] DHCP configuration for ports {'e2236bb8-a5a9-48a9-94be-f841308e054e'} is completed#033[00m Oct 5 06:07:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:07:11 Oct 5 06:07:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:07:11 localhost ceph-mgr[301363]: [balancer INFO root] 
do_upmap Oct 5 06:07:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['backups', 'manila_data', 'images', 'manila_metadata', '.mgr', 'volumes', 'vms'] Oct 5 06:07:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:07:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:07:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:07:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:07:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:07:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.3 KiB/s wr, 35 op/s Oct 5 06:07:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:07:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg 
target 5.443522589800856e-05 quantized to 32 (current 32) Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:07:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 
06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:07:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:07:11 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:11.923 2 INFO neutron.agent.securitygroups_rpc [None req-7ac4c9ba-a526-4e82-a322-0547237d9260 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['b41d26b0-78a8-4541-9b0c-eb273b0740f6']#033[00m Oct 5 06:07:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:07:12 localhost systemd[1]: tmp-crun.zE7hLh.mount: Deactivated successfully. Oct 5 06:07:12 localhost podman[332720]: 2025-10-05 10:07:12.11090699 +0000 UTC m=+0.091151703 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:07:12 localhost podman[332720]: 2025-10-05 10:07:12.119210055 +0000 UTC m=+0.099454778 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 06:07:12 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:07:12 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:12.461 2 INFO neutron.agent.securitygroups_rpc [None req-1ca8277a-d8c6-48e2-b92c-b668db68cd12 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:12 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:12.664 2 INFO neutron.agent.securitygroups_rpc [None req-f97f3ad6-e6d9-428a-8c53-4f6a2eec87df ab7690a92b524e11ab2ac3dec938162a 32b7a2f31633456293e1c4169c868ef0 - - default default] Security group member updated ['bc75949e-95f2-4d6f-bfbc-251e7f7ef75d']#033[00m Oct 5 06:07:13 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:13.004 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:12Z, description=, device_id=4d1c82d4-a9f8-4cdf-bb7e-82024e7ca3e2, device_owner=network:router_gateway, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e8d11154-4a27-404b-9574-4f8d0bd39266, ip_allocation=immediate, mac_address=fa:16:3e:1c:13:c6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1987, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:07:12Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:07:13 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:07:13 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:13 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:13 localhost podman[332756]: 2025-10-05 10:07:13.218741956 +0000 UTC m=+0.063776759 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:07:13 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:13.450 271653 INFO neutron.agent.dhcp.agent [None req-552e2f87-9ba5-4669-b809-47d0434f1b1e - - - - - -] DHCP configuration for ports {'e8d11154-4a27-404b-9574-4f8d0bd39266'} is completed#033[00m Oct 5 06:07:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:13 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:13.700 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:07:13 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:13.779 2 INFO neutron.agent.securitygroups_rpc [None req-08df03d9-f357-456c-9850-e7c7f1852f03 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:13 localhost nova_compute[297130]: 2025-10-05 10:07:13.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:13 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:13.877 2 INFO neutron.agent.securitygroups_rpc [None req-f76f0cd7-1a5d-417c-a073-8a4d0637b1a6 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated 
['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:14 localhost nova_compute[297130]: 2025-10-05 10:07:14.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:14 localhost nova_compute[297130]: 2025-10-05 10:07:14.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:14.287 2 INFO neutron.agent.securitygroups_rpc [None req-a10b7f8d-1ef0-4a56-992e-6e3aa0314a60 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:14.810 2 INFO neutron.agent.securitygroups_rpc [None req-209ca919-c5d6-45c0-a2a0-c1d200e5208b cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:14.959 2 INFO neutron.agent.securitygroups_rpc [None req-55de9688-4b87-4f8a-82a3-6735de35f494 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:15 localhost nova_compute[297130]: 2025-10-05 10:07:15.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:15 localhost nova_compute[297130]: 2025-10-05 10:07:15.269 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:15 localhost nova_compute[297130]: 2025-10-05 10:07:15.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:16 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:16.058 2 INFO neutron.agent.securitygroups_rpc [None req-d262c3fd-e84a-4a8c-a542-df0ebe7f723a 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:16 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:16.209 271653 INFO neutron.agent.linux.ip_lib [None req-54894ca7-f9e1-45aa-a5fc-fc3a977e8c16 - - - - - -] Device tap6d79d237-7e cannot be used as it has no MAC address#033[00m Oct 5 06:07:16 localhost nova_compute[297130]: 2025-10-05 10:07:16.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost kernel: device tap6d79d237-7e entered promiscuous mode Oct 5 06:07:16 localhost ovn_controller[157556]: 2025-10-05T10:07:16Z|00162|binding|INFO|Claiming lport 6d79d237-7ef0-4c47-9774-80abaebb109a for this chassis. 
Oct 5 06:07:16 localhost ovn_controller[157556]: 2025-10-05T10:07:16Z|00163|binding|INFO|6d79d237-7ef0-4c47-9774-80abaebb109a: Claiming unknown Oct 5 06:07:16 localhost nova_compute[297130]: 2025-10-05 10:07:16.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost NetworkManager[5970]: [1759658836.2487] manager: (tap6d79d237-7e): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Oct 5 06:07:16 localhost systemd-udevd[332815]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:07:16 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:07:16 localhost podman[332797]: 2025-10-05 10:07:16.257754192 +0000 UTC m=+0.062246439 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:16 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:16 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:16 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:16.266 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-83f6c22d-dafa-4d15-aac9-039d56a8acf4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f6c22d-dafa-4d15-aac9-039d56a8acf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7a8b2c827254f7f9907084cca2e1db9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa071cce-3858-4fc7-b42e-87a90ce77544, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6d79d237-7ef0-4c47-9774-80abaebb109a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:16 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:16.268 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 6d79d237-7ef0-4c47-9774-80abaebb109a in datapath 83f6c22d-dafa-4d15-aac9-039d56a8acf4 bound to our chassis#033[00m Oct 5 06:07:16 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:16.270 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 83f6c22d-dafa-4d15-aac9-039d56a8acf4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:16 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:16.271 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[742714ef-f3a3-454b-a734-8c3706087c29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:16 localhost 
journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost ovn_controller[157556]: 2025-10-05T10:07:16Z|00164|binding|INFO|Setting lport 6d79d237-7ef0-4c47-9774-80abaebb109a ovn-installed in OVS Oct 5 06:07:16 localhost ovn_controller[157556]: 2025-10-05T10:07:16Z|00165|binding|INFO|Setting lport 6d79d237-7ef0-4c47-9774-80abaebb109a up in Southbound Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost nova_compute[297130]: 2025-10-05 10:07:16.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost journal[237639]: ethtool ioctl error on tap6d79d237-7e: No such device Oct 5 06:07:16 localhost nova_compute[297130]: 2025-10-05 10:07:16.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost nova_compute[297130]: 2025-10-05 10:07:16.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:16 localhost nova_compute[297130]: 2025-10-05 10:07:16.365 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m 
Oct 5 06:07:16 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:16.703 2 INFO neutron.agent.securitygroups_rpc [None req-5c25fffb-d48c-4f4b-9ba8-ffa859290a6a 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:16 localhost openstack_network_exporter[250246]: ERROR 10:07:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:16 localhost openstack_network_exporter[250246]: ERROR 10:07:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:16 localhost openstack_network_exporter[250246]: ERROR 10:07:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:07:16 localhost openstack_network_exporter[250246]: ERROR 10:07:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:07:16 localhost openstack_network_exporter[250246]: Oct 5 06:07:16 localhost openstack_network_exporter[250246]: ERROR 10:07:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:07:16 localhost openstack_network_exporter[250246]: Oct 5 06:07:16 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:16.905 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:16Z, description=, device_id=4d1c82d4-a9f8-4cdf-bb7e-82024e7ca3e2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5e6ab3c2-6f1d-4361-986f-b175954664c4, ip_allocation=immediate, mac_address=fa:16:3e:84:49:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2025-10-05T10:07:07Z, description=, dns_domain=, id=ba615150-6291-461e-914d-8614dd64d36b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1258594110-network, port_security_enabled=True, project_id=f191af5fd15547479e573ab11c825146, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54982, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1935, status=ACTIVE, subnets=['359e90c1-b6bf-480d-a0f4-90f96378ad57'], tags=[], tenant_id=f191af5fd15547479e573ab11c825146, updated_at=2025-10-05T10:07:08Z, vlan_transparent=None, network_id=ba615150-6291-461e-914d-8614dd64d36b, port_security_enabled=False, project_id=f191af5fd15547479e573ab11c825146, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2019, status=DOWN, tags=[], tenant_id=f191af5fd15547479e573ab11c825146, updated_at=2025-10-05T10:07:16Z on network ba615150-6291-461e-914d-8614dd64d36b#033[00m Oct 5 06:07:17 localhost dnsmasq[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/addn_hosts - 1 addresses Oct 5 06:07:17 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/host Oct 5 06:07:17 localhost podman[332901]: 2025-10-05 10:07:17.093918553 +0000 UTC m=+0.044806206 container kill 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 
06:07:17 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/opts Oct 5 06:07:17 localhost nova_compute[297130]: 2025-10-05 10:07:17.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:17 localhost nova_compute[297130]: 2025-10-05 10:07:17.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:07:17 localhost nova_compute[297130]: 2025-10-05 10:07:17.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:07:17 localhost podman[332932]: Oct 5 06:07:17 localhost podman[332932]: 2025-10-05 10:07:17.28563986 +0000 UTC m=+0.090706740 container create a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:07:17 localhost nova_compute[297130]: 2025-10-05 10:07:17.306 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:07:17 localhost systemd[1]: Started libpod-conmon-a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db.scope. Oct 5 06:07:17 localhost systemd[1]: tmp-crun.bYfwvI.mount: Deactivated successfully. Oct 5 06:07:17 localhost podman[332932]: 2025-10-05 10:07:17.24320807 +0000 UTC m=+0.048274940 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:17 localhost systemd[1]: Started libcrun container. Oct 5 06:07:17 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:17.364 271653 INFO neutron.agent.dhcp.agent [None req-cece651f-cb6e-4e39-9922-8269ab920620 - - - - - -] DHCP configuration for ports {'5e6ab3c2-6f1d-4361-986f-b175954664c4'} is completed#033[00m Oct 5 06:07:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a10f6fbe963cd60818a268397290a508e873890fc2d1fd8132a408e9e68d620/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:17 localhost podman[332932]: 2025-10-05 10:07:17.377280675 +0000 UTC m=+0.182347555 container init a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:07:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:17.385 2 INFO neutron.agent.securitygroups_rpc [None req-5331fe0c-6def-43a6-a28b-e8e96a993e48 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule 
updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:17 localhost podman[332932]: 2025-10-05 10:07:17.387848982 +0000 UTC m=+0.192915862 container start a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:07:17 localhost dnsmasq[332954]: started, version 2.85 cachesize 150 Oct 5 06:07:17 localhost dnsmasq[332954]: DNS service limited to local subnets Oct 5 06:07:17 localhost dnsmasq[332954]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:17 localhost dnsmasq[332954]: warning: no upstream servers configured Oct 5 06:07:17 localhost dnsmasq-dhcp[332954]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:17 localhost dnsmasq[332954]: read /var/lib/neutron/dhcp/83f6c22d-dafa-4d15-aac9-039d56a8acf4/addn_hosts - 0 addresses Oct 5 06:07:17 localhost dnsmasq-dhcp[332954]: read /var/lib/neutron/dhcp/83f6c22d-dafa-4d15-aac9-039d56a8acf4/host Oct 5 06:07:17 localhost dnsmasq-dhcp[332954]: read /var/lib/neutron/dhcp/83f6c22d-dafa-4d15-aac9-039d56a8acf4/opts Oct 5 06:07:17 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:17.589 271653 INFO neutron.agent.dhcp.agent [None req-d16c87e3-4c92-4a26-8b9a-afd8c245072b - - - - - -] DHCP configuration for ports {'aa210fa4-7d38-40f8-8f41-285f0cc56d87'} is completed#033[00m Oct 5 06:07:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:17.645 2 INFO 
neutron.agent.securitygroups_rpc [None req-9f4832ee-5d49-487c-96d5-26bed528c76f 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:17.956 2 INFO neutron.agent.securitygroups_rpc [None req-169b8896-0bc5-46ee-959a-0db3f0bfd2ea 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:18 localhost nova_compute[297130]: 2025-10-05 10:07:18.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:18 localhost nova_compute[297130]: 2025-10-05 10:07:18.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:18 localhost nova_compute[297130]: 2025-10-05 10:07:18.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:18 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:18.905 2 INFO neutron.agent.securitygroups_rpc [None req-f90a23ab-2c05-43e9-a68f-6c5dbe010538 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:18 localhost ceph-mon[316511]: 
mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2750110298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2750110298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.282 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.308 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.309 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.309 2 DEBUG 
oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.310 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.310 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:07:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:07:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1503706433' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.797 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:07:19 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:19.828 2 INFO neutron.agent.securitygroups_rpc [None req-b4147394-98fe-43d4-98c1-416cb7b976dd 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['d7ff5a8e-9dd4-41ea-8172-eac851557fe5']#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.997 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.998 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11569MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.998 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:19 localhost nova_compute[297130]: 2025-10-05 10:07:19.999 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:20 localhost nova_compute[297130]: 2025-10-05 10:07:20.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:20 localhost nova_compute[297130]: 2025-10-05 10:07:20.322 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:07:20 localhost nova_compute[297130]: 2025-10-05 10:07:20.322 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:07:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:20.405 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:07:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:20.405 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:07:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:20.406 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:20 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:20.468 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:16Z, description=, device_id=4d1c82d4-a9f8-4cdf-bb7e-82024e7ca3e2, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5e6ab3c2-6f1d-4361-986f-b175954664c4, ip_allocation=immediate, mac_address=fa:16:3e:84:49:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:07Z, description=, dns_domain=, id=ba615150-6291-461e-914d-8614dd64d36b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-VolumesActionsTest-1258594110-network, port_security_enabled=True, project_id=f191af5fd15547479e573ab11c825146, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54982, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1935, status=ACTIVE, subnets=['359e90c1-b6bf-480d-a0f4-90f96378ad57'], tags=[], tenant_id=f191af5fd15547479e573ab11c825146, updated_at=2025-10-05T10:07:08Z, vlan_transparent=None, network_id=ba615150-6291-461e-914d-8614dd64d36b, port_security_enabled=False, project_id=f191af5fd15547479e573ab11c825146, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2019, status=DOWN, tags=[], tenant_id=f191af5fd15547479e573ab11c825146, updated_at=2025-10-05T10:07:16Z on network ba615150-6291-461e-914d-8614dd64d36b#033[00m Oct 5 06:07:20 localhost nova_compute[297130]: 2025-10-05 10:07:20.687 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:07:20 localhost podman[332992]: 2025-10-05 10:07:20.758608921 +0000 UTC m=+0.061709864 container kill 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:07:20 localhost dnsmasq[332719]: read 
/var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/addn_hosts - 1 addresses Oct 5 06:07:20 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/host Oct 5 06:07:20 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/opts Oct 5 06:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:07:20 localhost podman[333009]: 2025-10-05 10:07:20.893488888 +0000 UTC m=+0.108167954 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:07:20 localhost podman[333009]: 2025-10-05 10:07:20.905125813 +0000 UTC m=+0.119804859 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:07:20 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:07:20 localhost systemd[1]: tmp-crun.AcHsy4.mount: Deactivated successfully. 
Oct 5 06:07:20 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:20.975 271653 INFO neutron.agent.dhcp.agent [None req-76208252-0532-4e59-8cf6-3dbfabbff67b - - - - - -] DHCP configuration for ports {'5e6ab3c2-6f1d-4361-986f-b175954664c4'} is completed#033[00m Oct 5 06:07:20 localhost podman[333007]: 2025-10-05 10:07:20.975810279 +0000 UTC m=+0.194477363 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:20 localhost podman[333007]: 2025-10-05 10:07:20.989173852 +0000 UTC m=+0.207840946 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:07:20 localhost systemd[1]: 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:07:21 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:21.149 2 INFO neutron.agent.securitygroups_rpc [None req-b9a17396-b928-4b5b-98f0-a472d21cec0b 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['78bf4040-6e9f-4ef0-bd57-023c16739605']#033[00m Oct 5 06:07:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:07:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2074937946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.182 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.188 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.204 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.228 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.229 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:21 localhost nova_compute[297130]: 2025-10-05 10:07:21.273 2 DEBUG nova.compute.manager [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:07:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:22 localhost nova_compute[297130]: 2025-10-05 10:07:22.297 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:22 localhost nova_compute[297130]: 2025-10-05 10:07:22.298 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:07:22 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:22.547 2 INFO neutron.agent.securitygroups_rpc [None req-693ed2f6-d8d5-4791-b6dc-4285cd78eff9 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:23 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:23.514 2 INFO neutron.agent.securitygroups_rpc [None req-6733de2d-2a6b-463e-8b00-30e6441f698f 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['6e6bf508-1e73-4b5c-995d-22056e152d33']#033[00m Oct 5 06:07:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. 
Oct 5 06:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:07:23 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:23.851 2 INFO neutron.agent.securitygroups_rpc [None req-3c31fa34-f2a5-4486-ae28-cc78fe0adcf8 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['6e6bf508-1e73-4b5c-995d-22056e152d33']#033[00m Oct 5 06:07:23 localhost nova_compute[297130]: 2025-10-05 10:07:23.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:23 localhost podman[333073]: 2025-10-05 10:07:23.928838024 +0000 UTC m=+0.091259335 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Oct 5 06:07:23 localhost podman[333073]: 2025-10-05 10:07:23.962465876 +0000 UTC m=+0.124887207 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:23 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:07:24 localhost podman[333074]: 2025-10-05 10:07:24.048229351 +0000 UTC m=+0.208046012 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image) Oct 5 06:07:24 localhost podman[333074]: 2025-10-05 10:07:24.08509404 +0000 UTC m=+0.244910651 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:07:24 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:07:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:24.410 2 INFO neutron.agent.securitygroups_rpc [None req-2a8be6c9-0c7b-4153-962c-037874c56838 ba8f36397fe34869b1ddea72956496e9 e4fec76d88a14080a1ea7ef01fc37834 - - default default] Security group rule updated ['196a27b9-1ae6-48cd-8927-7a35ed2bb701']#033[00m Oct 5 06:07:25 localhost nova_compute[297130]: 2025-10-05 10:07:25.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:25 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:25.658 2 INFO neutron.agent.securitygroups_rpc [None req-b43bf185-1535-4013-a453-5ca09b3fe0fa 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['0024fe21-bb10-48ff-858e-6966a60efa16']#033[00m Oct 5 06:07:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail Oct 5 06:07:25 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:25.812 2 INFO neutron.agent.securitygroups_rpc [None req-460f44d0-d4bd-49be-83e4-664c20ad77c4 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['0024fe21-bb10-48ff-858e-6966a60efa16']#033[00m Oct 5 06:07:26 localhost podman[248157]: time="2025-10-05T10:07:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:07:26 localhost podman[248157]: @ - - [05/Oct/2025:10:07:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149957 "" "Go-http-client/1.1" Oct 5 06:07:26 localhost podman[248157]: @ - - [05/Oct/2025:10:07:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20272 "" "Go-http-client/1.1" Oct 5 06:07:27 localhost nova_compute[297130]: 2025-10-05 10:07:27.271 2 
DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:27 localhost nova_compute[297130]: 2025-10-05 10:07:27.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:07:27 localhost nova_compute[297130]: 2025-10-05 10:07:27.300 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:07:27 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:27.319 2 INFO neutron.agent.securitygroups_rpc [None req-4a584ae4-1112-479a-ab85-28ab8ab98f6e 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Oct 5 06:07:27 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:27.841 2 INFO neutron.agent.securitygroups_rpc [None req-c5b4d53b-f4bc-4029-b513-8a976632fbb6 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:27 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:27.941 2 INFO neutron.agent.securitygroups_rpc [None req-2139208c-fc5a-4b10-b0b6-a160c9a6ef08 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m 
Oct 5 06:07:28 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:28.488 2 INFO neutron.agent.securitygroups_rpc [None req-84657f08-3891-427d-bba4-4d37014315fa 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:28 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:28.677 271653 INFO neutron.agent.linux.ip_lib [None req-ba6b8eb3-553a-4421-a4f0-b7561cb44fcd - - - - - -] Device tap41ad1065-ec cannot be used as it has no MAC address#033[00m Oct 5 06:07:28 localhost nova_compute[297130]: 2025-10-05 10:07:28.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost kernel: device tap41ad1065-ec entered promiscuous mode Oct 5 06:07:28 localhost NetworkManager[5970]: [1759658848.7082] manager: (tap41ad1065-ec): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Oct 5 06:07:28 localhost systemd-udevd[333190]: Network interface NamePolicy= disabled on kernel command line. Oct 5 06:07:28 localhost ovn_controller[157556]: 2025-10-05T10:07:28Z|00166|binding|INFO|Claiming lport 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 for this chassis. 
Oct 5 06:07:28 localhost ovn_controller[157556]: 2025-10-05T10:07:28Z|00167|binding|INFO|41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2: Claiming unknown Oct 5 06:07:28 localhost nova_compute[297130]: 2025-10-05 10:07:28.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:07:28 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:07:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:07:28 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:07:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:28.728 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-c232cf4f-cedb-4414-ad13-7d12f6d45a5b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c232cf4f-cedb-4414-ad13-7d12f6d45a5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aeb79df06a24441fb7ff0aefdd8f34a4', 'neutron:revision_number': '1', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f217c838-41f8-4c5c-94fe-6e7d7ca4a602, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:28.729 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 in datapath c232cf4f-cedb-4414-ad13-7d12f6d45a5b bound to our chassis#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:28.730 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c232cf4f-cedb-4414-ad13-7d12f6d45a5b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:28.731 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[b155d37f-3a7d-4d45-99a3-713b4143ea67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost ovn_controller[157556]: 2025-10-05T10:07:28Z|00168|binding|INFO|Setting lport 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 ovn-installed in OVS Oct 5 06:07:28 localhost ovn_controller[157556]: 2025-10-05T10:07:28Z|00169|binding|INFO|Setting lport 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 up in Southbound Oct 5 06:07:28 localhost 
nova_compute[297130]: 2025-10-05 10:07:28.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost nova_compute[297130]: 2025-10-05 10:07:28.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 77508ca2-eba5-466e-b65f-5a24b0002b75 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:07:28 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 77508ca2-eba5-466e-b65f-5a24b0002b75 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:07:28 localhost ceph-mgr[301363]: [progress INFO root] Completed event 77508ca2-eba5-466e-b65f-5a24b0002b75 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:07:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:07:28 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost journal[237639]: ethtool ioctl error on tap41ad1065-ec: No such device Oct 5 06:07:28 localhost 
nova_compute[297130]: 2025-10-05 10:07:28.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost nova_compute[297130]: 2025-10-05 10:07:28.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:07:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2211 writes, 23K keys, 2211 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.06 MB/s#012Cumulative WAL: 2211 writes, 2211 syncs, 1.00 writes per sync, written: 0.03 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2211 writes, 23K keys, 2211 commit groups, 1.0 writes per commit group, ingest: 33.80 MB, 0.06 MB/s#012Interval WAL: 2211 writes, 2211 syncs, 1.00 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 172.6 0.15 0.07 9 0.017 0 0 0.0 0.0#012 L6 1/0 13.83 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 4.7 184.4 167.6 0.74 0.35 8 0.093 101K 3961 0.0 0.0#012 Sum 1/0 13.83 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 5.7 153.0 168.4 0.89 0.42 17 0.053 101K 3961 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 5.7 153.3 168.8 0.89 0.42 16 0.056 101K 3961 0.0 0.0#012#012** 
Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 0.0 184.4 167.6 0.74 0.35 8 0.093 101K 3961 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 175.0 0.15 0.07 8 0.019 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.026, interval 0.026#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.15 GB write, 0.25 MB/s write, 0.13 GB read, 0.23 MB/s read, 0.9 seconds#012Interval compaction: 0.15 GB write, 0.25 MB/s write, 0.13 GB read, 0.23 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a6bb1350#2 capacity: 308.00 MB usage: 13.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000177 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(704,12.96 MB,4.2077%) FilterBlock(17,301.23 KB,0.0955111%) IndexBlock(17,393.23 KB,0.124681%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 06:07:28 localhost 
nova_compute[297130]: 2025-10-05 10:07:28.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:28 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:07:28 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:07:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:29 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:29.040 2 INFO neutron.agent.securitygroups_rpc [None req-8bd9ee0f-6143-4175-9c02-390480d4799e 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:29 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:29.335 2 INFO neutron.agent.securitygroups_rpc [None req-476f947b-c764-483a-82cc-ec88b0153aca 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Oct 5 06:07:29 localhost podman[333280]: Oct 5 06:07:29 localhost podman[333280]: 2025-10-05 10:07:29.7495727 +0000 UTC m=+0.092819538 container create 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:07:29 localhost systemd[1]: Started libpod-conmon-3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d.scope. Oct 5 06:07:29 localhost podman[333280]: 2025-10-05 10:07:29.702874534 +0000 UTC m=+0.046121402 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:29 localhost systemd[1]: tmp-crun.qGwZbk.mount: Deactivated successfully. Oct 5 06:07:29 localhost systemd[1]: Started libcrun container. Oct 5 06:07:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8dc5eb9f1baed2deaf1cb2fd9267f8c259eb28c1139757f1c03620613f2ff3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:29 localhost podman[333280]: 2025-10-05 10:07:29.835678925 +0000 UTC m=+0.178925753 container init 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:07:29 localhost podman[333280]: 2025-10-05 10:07:29.846084947 +0000 UTC m=+0.189331775 container start 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 06:07:29 localhost dnsmasq[333298]: started, version 2.85 cachesize 150 Oct 5 06:07:29 localhost dnsmasq[333298]: DNS service limited to local subnets Oct 5 06:07:29 localhost dnsmasq[333298]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:29 localhost dnsmasq[333298]: warning: no upstream servers configured Oct 5 06:07:29 localhost dnsmasq-dhcp[333298]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:07:29 localhost dnsmasq[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/addn_hosts - 0 addresses Oct 5 06:07:29 localhost dnsmasq-dhcp[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/host Oct 5 06:07:29 localhost dnsmasq-dhcp[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/opts Oct 5 06:07:29 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:29.996 271653 INFO neutron.agent.dhcp.agent [None req-472ec0f2-410b-437a-ae59-9438a78ab29b - - - - - -] DHCP configuration for ports {'073f0484-3cc0-49bc-9f82-fc0ddd9a28d9'} is completed#033[00m Oct 5 06:07:30 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:30.086 2 INFO neutron.agent.securitygroups_rpc [None req-ee2aff28-3b14-4064-8176-7cb1e66e643a 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['deb4f3c9-aada-46a1-bfa9-cc7661c64e37']#033[00m Oct 5 06:07:30 localhost nova_compute[297130]: 2025-10-05 10:07:30.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:31 localhost ceph-mgr[301363]: 
log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Oct 5 06:07:31 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:31.708 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:31Z, description=, device_id=0e45d144-0167-4d74-8c28-c342e371761a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6979e6d8-de1e-44e8-94b4-254a353dc1e8, ip_allocation=immediate, mac_address=fa:16:3e:04:33:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2113, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:07:31Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:07:31 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:07:31 localhost ceph-mon[316511]: 
mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:07:31 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:31.840 2 INFO neutron.agent.securitygroups_rpc [None req-a750740a-7bbf-4d80-b3fd-3791018d3fe5 6faef6a4f4ba44e18abfbed0c5099371 7cc6b4a02ee84768ba86a5355165c8c9 - - default default] Security group rule updated ['081cd962-3c9a-4af0-bdfc-f1ce8d3a3fe1']#033[00m Oct 5 06:07:31 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:07:31 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:31 localhost podman[333316]: 2025-10-05 10:07:31.937613354 +0000 UTC m=+0.062162707 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Oct 5 06:07:31 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:32 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:32.263 271653 INFO neutron.agent.dhcp.agent [None req-ce9a25c0-dace-49e3-8b56-f931bcfef244 - - - - - -] DHCP configuration for ports {'6979e6d8-de1e-44e8-94b4-254a353dc1e8'} is completed#033[00m Oct 5 06:07:32 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:07:33 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:33.074 271653 INFO neutron.agent.linux.ip_lib [None req-6972c651-4d0d-419b-8de7-98e462c76951 - - - - - 
-] Device tap3882fb04-8b cannot be used as it has no MAC address#033[00m Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost kernel: device tap3882fb04-8b entered promiscuous mode Oct 5 06:07:33 localhost NetworkManager[5970]: [1759658853.1095] manager: (tap3882fb04-8b): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Oct 5 06:07:33 localhost ovn_controller[157556]: 2025-10-05T10:07:33Z|00170|binding|INFO|Claiming lport 3882fb04-8b96-46c2-88b8-7e5a4c9b259f for this chassis. Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost ovn_controller[157556]: 2025-10-05T10:07:33Z|00171|binding|INFO|3882fb04-8b96-46c2-88b8-7e5a4c9b259f: Claiming unknown Oct 5 06:07:33 localhost systemd-udevd[333347]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:07:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:33.124 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-be33f40d-88d6-47cb-afd2-1621b1101610', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be33f40d-88d6-47cb-afd2-1621b1101610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53994ea7-01d9-419a-9d2b-f5d50f60cc0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3882fb04-8b96-46c2-88b8-7e5a4c9b259f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:33.126 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 3882fb04-8b96-46c2-88b8-7e5a4c9b259f in datapath be33f40d-88d6-47cb-afd2-1621b1101610 bound to our chassis#033[00m Oct 5 06:07:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:33.128 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network be33f40d-88d6-47cb-afd2-1621b1101610 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:33 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:33.129 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9547da-af0e-457a-9c77-3ea6029aaf02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost ovn_controller[157556]: 2025-10-05T10:07:33Z|00172|binding|INFO|Setting lport 3882fb04-8b96-46c2-88b8-7e5a4c9b259f ovn-installed in OVS Oct 5 06:07:33 localhost ovn_controller[157556]: 2025-10-05T10:07:33Z|00173|binding|INFO|Setting lport 3882fb04-8b96-46c2-88b8-7e5a4c9b259f up in Southbound Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost journal[237639]: ethtool ioctl error on tap3882fb04-8b: No such device Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s Oct 5 06:07:33 localhost nova_compute[297130]: 2025-10-05 10:07:33.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:34 localhost podman[333418]: Oct 5 06:07:34 localhost podman[333418]: 2025-10-05 10:07:34.173112114 +0000 UTC m=+0.093684621 container create 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:07:34 localhost systemd[1]: Started libpod-conmon-8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2.scope. Oct 5 06:07:34 localhost podman[333418]: 2025-10-05 10:07:34.128484524 +0000 UTC m=+0.049057021 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:34 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9a1c21b171823831ecbd8ebcb638bd1eba4942286b25c33e974fbdf66d63e3d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:34 localhost podman[333418]: 2025-10-05 10:07:34.244927741 +0000 UTC m=+0.165500238 container init 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:07:34 localhost podman[333418]: 2025-10-05 10:07:34.262334153 +0000 UTC m=+0.182906650 container start 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:07:34 localhost dnsmasq[333436]: started, version 2.85 cachesize 150 Oct 5 06:07:34 localhost dnsmasq[333436]: DNS service limited to local subnets Oct 5 06:07:34 localhost dnsmasq[333436]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:34 localhost dnsmasq[333436]: warning: no upstream servers configured Oct 
5 06:07:34 localhost dnsmasq-dhcp[333436]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:34 localhost dnsmasq[333436]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 0 addresses Oct 5 06:07:34 localhost dnsmasq-dhcp[333436]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:34 localhost dnsmasq-dhcp[333436]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:34 localhost dnsmasq[332954]: exiting on receipt of SIGTERM Oct 5 06:07:34 localhost podman[333453]: 2025-10-05 10:07:34.473987642 +0000 UTC m=+0.063854983 container kill a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:07:34 localhost systemd[1]: libpod-a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db.scope: Deactivated successfully. 
Oct 5 06:07:34 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:34.515 271653 INFO neutron.agent.dhcp.agent [None req-7354ebb4-3804-44ee-af36-2f9bab2e3d49 - - - - - -] DHCP configuration for ports {'4ff8367d-1ec6-4eb1-b537-0292676235c2', '319125ac-c84e-4157-90b9-d51816743f04'} is completed#033[00m Oct 5 06:07:34 localhost podman[333467]: 2025-10-05 10:07:34.551681349 +0000 UTC m=+0.069219028 container died a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:07:34 localhost podman[333467]: 2025-10-05 10:07:34.579987466 +0000 UTC m=+0.097525055 container cleanup a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:34 localhost systemd[1]: libpod-conmon-a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db.scope: Deactivated successfully. Oct 5 06:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 06:07:34 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:34.601 2 INFO neutron.agent.securitygroups_rpc [None req-7c15e948-0756-4894-9401-16ac3bd80e33 b6eee72daf174482a09538159bfd443d f34fdb6c55c946fcb8470c230a141a31 - - default default] Security group member updated ['99deb70b-a280-4904-b641-029f0268e21a']#033[00m Oct 5 06:07:34 localhost podman[333474]: 2025-10-05 10:07:34.63400623 +0000 UTC m=+0.137328064 container remove a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83f6c22d-dafa-4d15-aac9-039d56a8acf4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true) Oct 5 06:07:34 localhost nova_compute[297130]: 2025-10-05 10:07:34.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:34 localhost kernel: device tap6d79d237-7e left promiscuous mode Oct 5 06:07:34 localhost ovn_controller[157556]: 2025-10-05T10:07:34Z|00174|binding|INFO|Releasing lport 6d79d237-7ef0-4c47-9774-80abaebb109a from this chassis (sb_readonly=0) Oct 5 06:07:34 localhost ovn_controller[157556]: 2025-10-05T10:07:34Z|00175|binding|INFO|Setting lport 6d79d237-7ef0-4c47-9774-80abaebb109a down in Southbound Oct 5 06:07:34 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:34.660 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-83f6c22d-dafa-4d15-aac9-039d56a8acf4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83f6c22d-dafa-4d15-aac9-039d56a8acf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7a8b2c827254f7f9907084cca2e1db9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa071cce-3858-4fc7-b42e-87a90ce77544, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6d79d237-7ef0-4c47-9774-80abaebb109a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:34 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:34.661 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 6d79d237-7ef0-4c47-9774-80abaebb109a in datapath 83f6c22d-dafa-4d15-aac9-039d56a8acf4 unbound from our chassis#033[00m Oct 5 06:07:34 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:34.662 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 83f6c22d-dafa-4d15-aac9-039d56a8acf4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:34 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:34.663 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[01390b01-8541-41f1-913b-c5ba2a03981d]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:34 localhost nova_compute[297130]: 2025-10-05 10:07:34.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:34 localhost podman[333492]: 2025-10-05 10:07:34.701848959 +0000 UTC m=+0.089495777 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Oct 5 06:07:34 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:34.733 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=905854c4-2ee3-43d4-aa1a-f2265daec7cc, ip_allocation=immediate, mac_address=fa:16:3e:57:3d:30, name=tempest-RoutersAdminNegativeTest-2143791746, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=True, project_id=f34fdb6c55c946fcb8470c230a141a31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['99deb70b-a280-4904-b641-029f0268e21a'], standard_attr_id=2141, status=DOWN, tags=[], tenant_id=f34fdb6c55c946fcb8470c230a141a31, updated_at=2025-10-05T10:07:34Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m 
Oct 5 06:07:34 localhost podman[333492]: 2025-10-05 10:07:34.738349399 +0000 UTC m=+0.125996147 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:34 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:07:34 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 4 addresses Oct 5 06:07:34 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:34 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:34 localhost podman[333544]: 2025-10-05 10:07:34.974238874 +0000 UTC m=+0.064640853 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true) Oct 5 06:07:35 localhost podman[333526]: 2025-10-05 10:07:34.967253276 +0000 UTC m=+0.121081044 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:07:35 localhost podman[333526]: 2025-10-05 10:07:35.054327826 +0000 UTC m=+0.208155554 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:07:35 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:07:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:35.131 271653 INFO neutron.agent.dhcp.agent [None req-664b740b-b654-45d2-aca6-0dde11df08aa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:35.149 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:34Z, description=, device_id=0e45d144-0167-4d74-8c28-c342e371761a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cdd3875a-3f3f-46d7-805d-3462a5a9af81, ip_allocation=immediate, mac_address=fa:16:3e:1b:1d:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:26Z, description=, dns_domain=, id=c232cf4f-cedb-4414-ad13-7d12f6d45a5b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-758629674-network, port_security_enabled=True, project_id=aeb79df06a24441fb7ff0aefdd8f34a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23454, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2081, status=ACTIVE, subnets=['2f058d03-f664-47fd-832a-b13cfb240177'], tags=[], tenant_id=aeb79df06a24441fb7ff0aefdd8f34a4, updated_at=2025-10-05T10:07:27Z, vlan_transparent=None, network_id=c232cf4f-cedb-4414-ad13-7d12f6d45a5b, port_security_enabled=False, project_id=aeb79df06a24441fb7ff0aefdd8f34a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2144, status=DOWN, tags=[], tenant_id=aeb79df06a24441fb7ff0aefdd8f34a4, updated_at=2025-10-05T10:07:35Z on network 
c232cf4f-cedb-4414-ad13-7d12f6d45a5b#033[00m Oct 5 06:07:35 localhost systemd[1]: tmp-crun.REQQ4p.mount: Deactivated successfully. Oct 5 06:07:35 localhost systemd[1]: var-lib-containers-storage-overlay-6a10f6fbe963cd60818a268397290a508e873890fc2d1fd8132a408e9e68d620-merged.mount: Deactivated successfully. Oct 5 06:07:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8cf76d6ba9ca20cc74366b611437f31ba68756a72143df8a79e4fc9d92e83db-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:35 localhost systemd[1]: run-netns-qdhcp\x2d83f6c22d\x2ddafa\x2d4d15\x2daac9\x2d039d56a8acf4.mount: Deactivated successfully. Oct 5 06:07:35 localhost nova_compute[297130]: 2025-10-05 10:07:35.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:35.263 271653 INFO neutron.agent.dhcp.agent [None req-25ee34cb-e945-4ea7-8a1e-19316272c871 - - - - - -] DHCP configuration for ports {'905854c4-2ee3-43d4-aa1a-f2265daec7cc'} is completed#033[00m Oct 5 06:07:35 localhost podman[333593]: 2025-10-05 10:07:35.403685949 +0000 UTC m=+0.075489018 container kill 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:35 localhost dnsmasq[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/addn_hosts - 1 addresses Oct 5 06:07:35 localhost dnsmasq-dhcp[333298]: read 
/var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/host Oct 5 06:07:35 localhost dnsmasq-dhcp[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/opts Oct 5 06:07:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:35.461 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:35 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:35.500 2 INFO neutron.agent.securitygroups_rpc [None req-8bc4a948-b240-4615-bb13-976b18c48ca9 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['5ec48d99-9389-4c99-9e3e-c175128cae07']#033[00m Oct 5 06:07:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:35.603 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:34Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eede9c4a-d41f-4281-9dff-3536be9b0bf1, ip_allocation=immediate, mac_address=fa:16:3e:d3:e8:27, name=tempest-PortsIpV6TestJSON-533032508, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:06:25Z, description=, dns_domain=, id=be33f40d-88d6-47cb-afd2-1621b1101610, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1042766090, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37790, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1693, status=ACTIVE, subnets=['7f5ec975-e866-4fe1-81a7-bc07282290e3'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, 
updated_at=2025-10-05T10:07:32Z, vlan_transparent=None, network_id=be33f40d-88d6-47cb-afd2-1621b1101610, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5ec48d99-9389-4c99-9e3e-c175128cae07'], standard_attr_id=2143, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:35Z on network be33f40d-88d6-47cb-afd2-1621b1101610#033[00m Oct 5 06:07:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s Oct 5 06:07:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:35.730 271653 INFO neutron.agent.dhcp.agent [None req-fea49e68-6b36-4985-8591-6616859cf7f0 - - - - - -] DHCP configuration for ports {'cdd3875a-3f3f-46d7-805d-3462a5a9af81'} is completed#033[00m Oct 5 06:07:35 localhost dnsmasq[333436]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 1 addresses Oct 5 06:07:35 localhost dnsmasq-dhcp[333436]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:35 localhost podman[333633]: 2025-10-05 10:07:35.810512578 +0000 UTC m=+0.065233039 container kill 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:35 localhost dnsmasq-dhcp[333436]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 
06:07:36 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:36.076 271653 INFO neutron.agent.dhcp.agent [None req-ff3960f1-a1a8-4af4-96b8-aa5a3e033172 - - - - - -] DHCP configuration for ports {'eede9c4a-d41f-4281-9dff-3536be9b0bf1'} is completed#033[00m Oct 5 06:07:36 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:36.102 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:36 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:36.330 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:34Z, description=, device_id=0e45d144-0167-4d74-8c28-c342e371761a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cdd3875a-3f3f-46d7-805d-3462a5a9af81, ip_allocation=immediate, mac_address=fa:16:3e:1b:1d:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:26Z, description=, dns_domain=, id=c232cf4f-cedb-4414-ad13-7d12f6d45a5b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-758629674-network, port_security_enabled=True, project_id=aeb79df06a24441fb7ff0aefdd8f34a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23454, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2081, status=ACTIVE, subnets=['2f058d03-f664-47fd-832a-b13cfb240177'], tags=[], tenant_id=aeb79df06a24441fb7ff0aefdd8f34a4, updated_at=2025-10-05T10:07:27Z, vlan_transparent=None, network_id=c232cf4f-cedb-4414-ad13-7d12f6d45a5b, port_security_enabled=False, project_id=aeb79df06a24441fb7ff0aefdd8f34a4, qos_network_policy_id=None, qos_policy_id=None, 
resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2144, status=DOWN, tags=[], tenant_id=aeb79df06a24441fb7ff0aefdd8f34a4, updated_at=2025-10-05T10:07:35Z on network c232cf4f-cedb-4414-ad13-7d12f6d45a5b#033[00m Oct 5 06:07:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:36 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1598406326' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:36 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1598406326' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:36 localhost podman[333669]: 2025-10-05 10:07:36.568134529 +0000 UTC m=+0.066533874 container kill 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:36 localhost dnsmasq[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/addn_hosts - 1 addresses Oct 5 06:07:36 localhost dnsmasq-dhcp[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/host Oct 5 06:07:36 localhost dnsmasq-dhcp[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/opts Oct 5 06:07:36 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:07:36 localhost systemd[1]: tmp-crun.SdQvpv.mount: Deactivated successfully. Oct 5 06:07:36 localhost podman[333682]: 2025-10-05 10:07:36.687971648 +0000 UTC m=+0.095538071 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm) Oct 5 06:07:36 localhost nova_compute[297130]: 2025-10-05 10:07:36.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:36 localhost podman[333682]: 2025-10-05 10:07:36.75849777 +0000 UTC m=+0.166064203 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal 
Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7) Oct 5 06:07:36 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:07:36 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:36.943 271653 INFO neutron.agent.dhcp.agent [None req-48d79995-abd5-4451-9d47-b4a29f99fd29 - - - - - -] DHCP configuration for ports {'cdd3875a-3f3f-46d7-805d-3462a5a9af81'} is completed#033[00m Oct 5 06:07:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 43 op/s Oct 5 06:07:38 localhost dnsmasq[333436]: exiting on receipt of SIGTERM Oct 5 06:07:38 localhost systemd[1]: libpod-8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2.scope: Deactivated successfully. 
Oct 5 06:07:38 localhost podman[333725]: 2025-10-05 10:07:38.164929542 +0000 UTC m=+0.065868497 container kill 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:07:38 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:38.182 2 INFO neutron.agent.securitygroups_rpc [None req-84848f06-9748-4b0b-af05-ee277d570d7d b6eee72daf174482a09538159bfd443d f34fdb6c55c946fcb8470c230a141a31 - - default default] Security group member updated ['99deb70b-a280-4904-b641-029f0268e21a']#033[00m Oct 5 06:07:38 localhost podman[333737]: 2025-10-05 10:07:38.226601754 +0000 UTC m=+0.053496332 container died 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:07:38 localhost podman[333737]: 2025-10-05 10:07:38.263612098 +0000 UTC m=+0.090506666 container cleanup 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:07:38 localhost systemd[1]: libpod-conmon-8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2.scope: Deactivated successfully. Oct 5 06:07:38 localhost podman[333744]: 2025-10-05 10:07:38.33671116 +0000 UTC m=+0.144709405 container remove 8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:07:38 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:07:38 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:38 localhost podman[333783]: 2025-10-05 10:07:38.537291617 +0000 UTC m=+0.070547563 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, 
org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:38 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:38 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:38.715 2 INFO neutron.agent.securitygroups_rpc [None req-794b5ac2-514f-4cf2-8f6d-60c082ae4422 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72acb41c-3515-430f-8b0d-c6b4b4c48929', '5ec48d99-9389-4c99-9e3e-c175128cae07']#033[00m Oct 5 06:07:38 localhost nova_compute[297130]: 2025-10-05 10:07:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:39 localhost systemd[1]: var-lib-containers-storage-overlay-c9a1c21b171823831ecbd8ebcb638bd1eba4942286b25c33e974fbdf66d63e3d-merged.mount: Deactivated successfully. Oct 5 06:07:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ea4f9469d91dd56cf1bb512bc2217ecd3f2888a45e4cfa91a181332c69a20c2-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:07:39 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:39.182 2 INFO neutron.agent.securitygroups_rpc [None req-fbb1a0f1-0d63-49d2-920b-22d6e8214efb cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72acb41c-3515-430f-8b0d-c6b4b4c48929']#033[00m Oct 5 06:07:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Oct 5 06:07:39 localhost podman[333853]: Oct 5 06:07:39 localhost podman[333853]: 2025-10-05 10:07:39.813981142 +0000 UTC m=+0.091072851 container create 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:07:39 localhost systemd[1]: Started libpod-conmon-5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1.scope. Oct 5 06:07:39 localhost podman[333853]: 2025-10-05 10:07:39.770837963 +0000 UTC m=+0.047929692 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:39 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbde84f1017a7ab2432d0b2bba286bc47c27cd7d1b96af15d280ff688d5b7671/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:39 localhost podman[333853]: 2025-10-05 10:07:39.898642307 +0000 UTC m=+0.175734016 container init 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:39 localhost podman[333853]: 2025-10-05 10:07:39.907452156 +0000 UTC m=+0.184543865 container start 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:39 localhost dnsmasq[333871]: started, version 2.85 cachesize 150 Oct 5 06:07:39 localhost dnsmasq[333871]: DNS service limited to local subnets Oct 5 06:07:39 localhost dnsmasq[333871]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:39 localhost dnsmasq[333871]: warning: no upstream servers configured Oct 
5 06:07:39 localhost dnsmasq-dhcp[333871]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 5 06:07:39 localhost dnsmasq-dhcp[333871]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:39 localhost dnsmasq[333871]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 1 addresses Oct 5 06:07:39 localhost dnsmasq-dhcp[333871]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:39 localhost dnsmasq-dhcp[333871]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:39 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:39.971 271653 INFO neutron.agent.dhcp.agent [None req-a634f8a8-eec7-48f6-a8b9-596391e959c7 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:34Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eede9c4a-d41f-4281-9dff-3536be9b0bf1, ip_allocation=immediate, mac_address=fa:16:3e:d3:e8:27, name=tempest-PortsIpV6TestJSON-1534187717, network_id=be33f40d-88d6-47cb-afd2-1621b1101610, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['72acb41c-3515-430f-8b0d-c6b4b4c48929'], standard_attr_id=2143, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:38Z on network be33f40d-88d6-47cb-afd2-1621b1101610#033[00m Oct 5 06:07:40 localhost dnsmasq[333871]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 1 addresses Oct 5 06:07:40 localhost dnsmasq-dhcp[333871]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:40 localhost dnsmasq-dhcp[333871]: read 
/var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:40 localhost podman[333888]: 2025-10-05 10:07:40.165905144 +0000 UTC m=+0.066797162 container kill 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Oct 5 06:07:40 localhost nova_compute[297130]: 2025-10-05 10:07:40.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:40 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:40.276 271653 INFO neutron.agent.dhcp.agent [None req-e0861278-893d-489c-9f38-4e843e809610 - - - - - -] DHCP configuration for ports {'3882fb04-8b96-46c2-88b8-7e5a4c9b259f', 'eede9c4a-d41f-4281-9dff-3536be9b0bf1', '4ff8367d-1ec6-4eb1-b537-0292676235c2', '319125ac-c84e-4157-90b9-d51816743f04'} is completed#033[00m Oct 5 06:07:40 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:40.397 271653 INFO neutron.agent.dhcp.agent [None req-6a5129cb-9341-4d63-ae6a-e5579324ef15 - - - - - -] DHCP configuration for ports {'eede9c4a-d41f-4281-9dff-3536be9b0bf1'} is completed#033[00m Oct 5 06:07:40 localhost dnsmasq[333871]: exiting on receipt of SIGTERM Oct 5 06:07:40 localhost podman[333927]: 2025-10-05 10:07:40.601274237 +0000 UTC m=+0.061384815 container kill 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:07:40 localhost systemd[1]: libpod-5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1.scope: Deactivated successfully. Oct 5 06:07:40 localhost podman[333942]: 2025-10-05 10:07:40.679080797 +0000 UTC m=+0.058785815 container died 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:07:40 localhost podman[333942]: 2025-10-05 10:07:40.717931581 +0000 UTC m=+0.097636509 container cleanup 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:40 localhost systemd[1]: libpod-conmon-5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1.scope: Deactivated successfully. 
Oct 5 06:07:40 localhost podman[333943]: 2025-10-05 10:07:40.756460285 +0000 UTC m=+0.131744183 container remove 5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:07:41 localhost nova_compute[297130]: 2025-10-05 10:07:41.068 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:07:41 localhost systemd[1]: var-lib-containers-storage-overlay-fbde84f1017a7ab2432d0b2bba286bc47c27cd7d1b96af15d280ff688d5b7671-merged.mount: Deactivated successfully. Oct 5 06:07:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bd5d2d6e48eb2d81d9bdaf028a554138e8428ba98cdf9508e9bd6de39967dd1-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:07:41 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3116703772' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:07:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:07:41 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3116703772' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:07:41 localhost podman[334021]: Oct 5 06:07:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:07:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:07:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:07:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:07:41 localhost podman[334021]: 2025-10-05 10:07:41.669045677 +0000 UTC m=+0.091312676 container create bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:07:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Oct 5 06:07:41 localhost systemd[1]: Started libpod-conmon-bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4.scope. Oct 5 06:07:41 localhost podman[334021]: 2025-10-05 10:07:41.628343214 +0000 UTC m=+0.050610233 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:07:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:07:41 localhost systemd[1]: Started libcrun container. Oct 5 06:07:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afccca7a5f5b903a78d4a2db634eb87eaec500320dafebe52e8ff14e4a2c9ab2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:41 localhost podman[334021]: 2025-10-05 10:07:41.750908297 +0000 UTC m=+0.173175306 container init bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001) Oct 5 06:07:41 localhost podman[334021]: 2025-10-05 10:07:41.761002351 +0000 UTC m=+0.183269360 container start bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:41 localhost dnsmasq[334040]: started, version 2.85 cachesize 150 Oct 5 06:07:41 localhost dnsmasq[334040]: DNS service limited to local subnets Oct 5 06:07:41 localhost dnsmasq[334040]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua 
TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:41 localhost dnsmasq[334040]: warning: no upstream servers configured Oct 5 06:07:41 localhost dnsmasq-dhcp[334040]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 5 06:07:41 localhost dnsmasq[334040]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 0 addresses Oct 5 06:07:41 localhost dnsmasq-dhcp[334040]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:41 localhost dnsmasq-dhcp[334040]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:41 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:41.852 271653 INFO neutron.agent.linux.ip_lib [None req-d8352944-7432-4dd8-9a52-8c26bd4f3192 - - - - - -] Device tap6bcaa5aa-9c cannot be used as it has no MAC address#033[00m Oct 5 06:07:41 localhost nova_compute[297130]: 2025-10-05 10:07:41.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:41 localhost kernel: device tap6bcaa5aa-9c entered promiscuous mode Oct 5 06:07:41 localhost NetworkManager[5970]: [1759658861.8807] manager: (tap6bcaa5aa-9c): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Oct 5 06:07:41 localhost nova_compute[297130]: 2025-10-05 10:07:41.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:41 localhost ovn_controller[157556]: 2025-10-05T10:07:41Z|00176|binding|INFO|Claiming lport 6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac for this chassis. Oct 5 06:07:41 localhost ovn_controller[157556]: 2025-10-05T10:07:41Z|00177|binding|INFO|6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac: Claiming unknown Oct 5 06:07:41 localhost systemd-udevd[334050]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:07:41 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:41.894 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-31ea1d93-95c6-4900-b239-e9f9ea6a016b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31ea1d93-95c6-4900-b239-e9f9ea6a016b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cedc436004b98b4d6a7b8317517ef', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4084cef0-adb6-455b-9413-4ccc06202d1a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:41 localhost ovn_controller[157556]: 2025-10-05T10:07:41Z|00178|binding|INFO|Setting lport 6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac ovn-installed in OVS Oct 5 06:07:41 localhost ovn_controller[157556]: 2025-10-05T10:07:41Z|00179|binding|INFO|Setting lport 6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac up in Southbound Oct 5 06:07:41 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:41.896 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac in datapath 31ea1d93-95c6-4900-b239-e9f9ea6a016b bound to our chassis#033[00m Oct 
5 06:07:41 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:41.898 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 31ea1d93-95c6-4900-b239-e9f9ea6a016b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:07:41 localhost nova_compute[297130]: 2025-10-05 10:07:41.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:41 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:41.899 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5adee7-6566-4d65-be46-efc2d9d7b4bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:41 localhost nova_compute[297130]: 2025-10-05 10:07:41.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:41 localhost nova_compute[297130]: 2025-10-05 10:07:41.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:42 localhost nova_compute[297130]: 2025-10-05 10:07:42.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:42 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:42.014 271653 INFO neutron.agent.dhcp.agent [None req-fe4208ed-1ea6-40b6-bc5b-d2a138947713 - - - - - -] DHCP configuration for ports {'3882fb04-8b96-46c2-88b8-7e5a4c9b259f', '4ff8367d-1ec6-4eb1-b537-0292676235c2', '319125ac-c84e-4157-90b9-d51816743f04'} is completed#033[00m Oct 5 06:07:42 localhost dnsmasq[334040]: exiting on receipt of SIGTERM Oct 5 06:07:42 localhost podman[334079]: 2025-10-05 10:07:42.117856886 +0000 UTC m=+0.053840431 container kill 
bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:07:42 localhost systemd[1]: libpod-bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4.scope: Deactivated successfully. Oct 5 06:07:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:07:42 localhost podman[334097]: 2025-10-05 10:07:42.199704365 +0000 UTC m=+0.060798399 container died bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:07:42 localhost systemd[1]: tmp-crun.C9cI3M.mount: Deactivated successfully. Oct 5 06:07:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4-userdata-shm.mount: Deactivated successfully. 
Oct 5 06:07:42 localhost podman[334117]: 2025-10-05 10:07:42.275619734 +0000 UTC m=+0.088897782 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Oct 5 06:07:42 localhost podman[334097]: 2025-10-05 10:07:42.308093884 +0000 UTC 
m=+0.169187888 container remove bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:07:42 localhost systemd[1]: libpod-conmon-bcc9e867f9d7f540558e1b6ad20c79f012634dd8619792051833018c62877aa4.scope: Deactivated successfully. Oct 5 06:07:42 localhost podman[334117]: 2025-10-05 10:07:42.311216398 +0000 UTC m=+0.124494456 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 06:07:42 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:07:42 localhost podman[334189]: Oct 5 06:07:42 localhost podman[334189]: 2025-10-05 10:07:42.938851515 +0000 UTC m=+0.095203852 container create 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 06:07:42 localhost systemd[1]: Started libpod-conmon-57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10.scope. 
Oct 5 06:07:42 localhost podman[334189]: 2025-10-05 10:07:42.892522109 +0000 UTC m=+0.048874436 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:42 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:42.994 2 INFO neutron.agent.securitygroups_rpc [None req-7ace4b82-0cc4-4084-83e7-ac763f44855c cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['7afb8f56-9b39-4524-946c-98857ae057ed']#033[00m Oct 5 06:07:43 localhost systemd[1]: Started libcrun container. Oct 5 06:07:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fdc1b2718eef190c2068061a62b8d5847b9dc61c34bf2a7aae03db3f4ca4a0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:43 localhost podman[334189]: 2025-10-05 10:07:43.017978761 +0000 UTC m=+0.174331058 container init 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:07:43 localhost podman[334189]: 2025-10-05 10:07:43.027687004 +0000 UTC m=+0.184039301 container start 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:43 localhost dnsmasq[334219]: started, version 2.85 cachesize 150 Oct 5 06:07:43 localhost dnsmasq[334219]: DNS service limited to local subnets Oct 5 06:07:43 localhost dnsmasq[334219]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:43 localhost dnsmasq[334219]: warning: no upstream servers configured Oct 5 06:07:43 localhost dnsmasq-dhcp[334219]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:07:43 localhost dnsmasq[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/addn_hosts - 0 addresses Oct 5 06:07:43 localhost dnsmasq-dhcp[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/host Oct 5 06:07:43 localhost dnsmasq-dhcp[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/opts Oct 5 06:07:43 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:43.074 2 INFO neutron.agent.securitygroups_rpc [None req-df770e99-c016-4090-bd54-f5a4e9cc943d 70cea673858c4ca7a047572a65bd009d ca6cedc436004b98b4d6a7b8317517ef - - default default] Security group member updated ['b57cacfc-2c28-480a-b1eb-ffe3c939d72c']#033[00m Oct 5 06:07:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:43.129 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=090d93ad-5def-464e-8e9b-8fe6fa6ed96c, ip_allocation=immediate, mac_address=fa:16:3e:5e:e4:be, name=tempest-TagsExtTest-1641614630, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T10:07:38Z, description=, dns_domain=, id=31ea1d93-95c6-4900-b239-e9f9ea6a016b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-1546876332, port_security_enabled=True, project_id=ca6cedc436004b98b4d6a7b8317517ef, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54553, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['8658a9f3-b4a3-41eb-b096-88d88ab1f284'], tags=[], tenant_id=ca6cedc436004b98b4d6a7b8317517ef, updated_at=2025-10-05T10:07:40Z, vlan_transparent=None, network_id=31ea1d93-95c6-4900-b239-e9f9ea6a016b, port_security_enabled=True, project_id=ca6cedc436004b98b4d6a7b8317517ef, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b57cacfc-2c28-480a-b1eb-ffe3c939d72c'], standard_attr_id=2189, status=DOWN, tags=[], tenant_id=ca6cedc436004b98b4d6a7b8317517ef, updated_at=2025-10-05T10:07:42Z on network 31ea1d93-95c6-4900-b239-e9f9ea6a016b#033[00m Oct 5 06:07:43 localhost systemd[1]: var-lib-containers-storage-overlay-afccca7a5f5b903a78d4a2db634eb87eaec500320dafebe52e8ff14e4a2c9ab2-merged.mount: Deactivated successfully. Oct 5 06:07:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:43.195 271653 INFO neutron.agent.dhcp.agent [None req-7cc46b6a-3b60-4c66-89f0-f63929c2cdac - - - - - -] DHCP configuration for ports {'826f64e0-530d-4081-9b5b-a5f0e1bf5467'} is completed#033[00m Oct 5 06:07:43 localhost systemd[1]: tmp-crun.ELyu7c.mount: Deactivated successfully. 
Oct 5 06:07:43 localhost dnsmasq[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/addn_hosts - 1 addresses Oct 5 06:07:43 localhost dnsmasq-dhcp[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/host Oct 5 06:07:43 localhost dnsmasq-dhcp[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/opts Oct 5 06:07:43 localhost podman[334246]: 2025-10-05 10:07:43.374924848 +0000 UTC m=+0.072487566 container kill 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:07:43 localhost podman[334285]: Oct 5 06:07:43 localhost podman[334285]: 2025-10-05 10:07:43.571379565 +0000 UTC m=+0.091636925 container create 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:07:43 localhost systemd[1]: Started libpod-conmon-507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1.scope. Oct 5 06:07:43 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:43 localhost podman[334285]: 2025-10-05 10:07:43.524477873 +0000 UTC m=+0.044735233 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f8bdb7dae58eace555b20350a69e54964d3b3b89e1c6ff1cf1724d6f43a1f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:43 localhost podman[334285]: 2025-10-05 10:07:43.634840315 +0000 UTC m=+0.155097695 container init 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:43.639 271653 INFO neutron.agent.dhcp.agent [None req-89f86cf0-3a36-4fcc-9ab9-ec16ee117bd4 - - - - - -] DHCP configuration for ports {'090d93ad-5def-464e-8e9b-8fe6fa6ed96c'} is completed#033[00m Oct 5 06:07:43 localhost podman[334285]: 2025-10-05 10:07:43.646242305 +0000 UTC m=+0.166499665 container start 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001) Oct 5 06:07:43 localhost dnsmasq[334307]: started, version 2.85 cachesize 150 Oct 5 06:07:43 localhost dnsmasq[334307]: DNS service limited to local subnets Oct 5 06:07:43 localhost dnsmasq[334307]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:43 localhost dnsmasq[334307]: warning: no upstream servers configured Oct 5 06:07:43 localhost dnsmasq-dhcp[334307]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 5 06:07:43 localhost dnsmasq-dhcp[334307]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:43 localhost dnsmasq[334307]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 0 addresses Oct 5 06:07:43 localhost dnsmasq-dhcp[334307]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:43 localhost dnsmasq-dhcp[334307]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.1 KiB/s wr, 44 op/s Oct 5 06:07:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:43.709 271653 INFO neutron.agent.dhcp.agent [None req-88343ac9-0d42-4e76-8bf6-ea5eb337ff10 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c514a814-b6ef-40d3-a931-1f90bbf66ed1, ip_allocation=immediate, mac_address=fa:16:3e:c4:b6:36, name=tempest-PortsIpV6TestJSON-1166019042, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2025-10-05T10:06:25Z, description=, dns_domain=, id=be33f40d-88d6-47cb-afd2-1621b1101610, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1042766090, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37790, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=1693, status=ACTIVE, subnets=['756ab225-4cd3-41bb-b7d2-d9d78daebae2', 'f1fd5d76-deb7-42e0-9675-4a55b89b0263'], tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:40Z, vlan_transparent=None, network_id=be33f40d-88d6-47cb-afd2-1621b1101610, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7afb8f56-9b39-4524-946c-98857ae057ed'], standard_attr_id=2190, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:42Z on network be33f40d-88d6-47cb-afd2-1621b1101610#033[00m Oct 5 06:07:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:43.872 271653 INFO neutron.agent.dhcp.agent [None req-521ddbef-0ad1-4c79-9d8a-94bc60877c9c - - - - - -] DHCP configuration for ports {'3882fb04-8b96-46c2-88b8-7e5a4c9b259f', '4ff8367d-1ec6-4eb1-b537-0292676235c2', '319125ac-c84e-4157-90b9-d51816743f04'} is completed#033[00m Oct 5 06:07:43 localhost dnsmasq[334307]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 1 addresses Oct 5 06:07:43 localhost podman[334326]: 2025-10-05 10:07:43.901918186 +0000 UTC m=+0.063857462 container kill 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:07:43 localhost dnsmasq-dhcp[334307]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:43 localhost dnsmasq-dhcp[334307]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:43 localhost nova_compute[297130]: 2025-10-05 10:07:43.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:44 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:44.206 271653 INFO neutron.agent.dhcp.agent [None req-4e3f217c-7045-46f5-922e-f8d9ee2ce835 - - - - - -] DHCP configuration for ports {'c514a814-b6ef-40d3-a931-1f90bbf66ed1'} is completed#033[00m Oct 5 06:07:44 localhost dnsmasq[334307]: exiting on receipt of SIGTERM Oct 5 06:07:44 localhost podman[334366]: 2025-10-05 10:07:44.690499917 +0000 UTC m=+0.059397171 container kill 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:44 localhost systemd[1]: 
libpod-507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1.scope: Deactivated successfully. Oct 5 06:07:44 localhost podman[334383]: 2025-10-05 10:07:44.76328402 +0000 UTC m=+0.052049632 container died 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Oct 5 06:07:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:44 localhost systemd[1]: var-lib-containers-storage-overlay-c8f8bdb7dae58eace555b20350a69e54964d3b3b89e1c6ff1cf1724d6f43a1f6-merged.mount: Deactivated successfully. Oct 5 06:07:44 localhost podman[334383]: 2025-10-05 10:07:44.804061805 +0000 UTC m=+0.092827387 container remove 507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:07:44 localhost systemd[1]: libpod-conmon-507862b9a024fc8d0e498e8ad327f44c99db28db171c1316d4e6f9c45b5ee1f1.scope: Deactivated successfully. 
Oct 5 06:07:45 localhost nova_compute[297130]: 2025-10-05 10:07:45.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:45.435 2 INFO neutron.agent.securitygroups_rpc [None req-1b97a73b-3735-424d-b9eb-71862dbcdb8c cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['8348d8b9-8edc-450f-bfc6-3a50148b401f', '82b4f83b-cc5b-4542-abf5-d19bf0c21453', '7afb8f56-9b39-4524-946c-98857ae057ed']#033[00m Oct 5 06:07:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 145 MiB data, 770 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s Oct 5 06:07:45 localhost podman[334459]: Oct 5 06:07:46 localhost podman[334459]: 2025-10-05 10:07:46.007140094 +0000 UTC m=+0.086827865 container create adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:07:46 localhost systemd[1]: Started libpod-conmon-adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b.scope. Oct 5 06:07:46 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c6fe074ff0292c9b91476f6274d4856ff6f32fee76c0eddc2ed2735ef92ba6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:07:46 localhost podman[334459]: 2025-10-05 10:07:45.967033846 +0000 UTC m=+0.046721627 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:46 localhost podman[334459]: 2025-10-05 10:07:46.067711956 +0000 UTC m=+0.147399707 container init adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:07:46 localhost podman[334459]: 2025-10-05 10:07:46.077213614 +0000 UTC m=+0.156901365 container start adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:07:46 localhost dnsmasq[334478]: started, version 2.85 cachesize 150 Oct 5 06:07:46 localhost dnsmasq[334478]: DNS service limited to local subnets Oct 5 06:07:46 localhost dnsmasq[334478]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:07:46 localhost dnsmasq[334478]: warning: no upstream servers configured Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: DHCPv6, static leases only on 2001:db8::, lease time 1d Oct 5 06:07:46 localhost dnsmasq[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 1 addresses Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:46.143 271653 INFO neutron.agent.dhcp.agent [None req-a4b71a04-725e-4064-88eb-9d32baaa5a08 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:07:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c514a814-b6ef-40d3-a931-1f90bbf66ed1, ip_allocation=immediate, mac_address=fa:16:3e:c4:b6:36, name=tempest-PortsIpV6TestJSON-1397783991, network_id=be33f40d-88d6-47cb-afd2-1621b1101610, port_security_enabled=True, project_id=3399a1ea839f4cce84fcedf3190ff04b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['82b4f83b-cc5b-4542-abf5-d19bf0c21453', '8348d8b9-8edc-450f-bfc6-3a50148b401f'], standard_attr_id=2190, status=DOWN, tags=[], tenant_id=3399a1ea839f4cce84fcedf3190ff04b, updated_at=2025-10-05T10:07:44Z on network 
be33f40d-88d6-47cb-afd2-1621b1101610#033[00m Oct 5 06:07:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:46.149 2 INFO neutron.agent.securitygroups_rpc [None req-049b9b58-6a79-440b-bc0e-124f0a306f57 cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['8348d8b9-8edc-450f-bfc6-3a50148b401f', '82b4f83b-cc5b-4542-abf5-d19bf0c21453']#033[00m Oct 5 06:07:46 localhost dnsmasq[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 1 addresses Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:46 localhost podman[334498]: 2025-10-05 10:07:46.33819612 +0000 UTC m=+0.058281082 container kill adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2) Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:46.725 271653 INFO neutron.agent.dhcp.agent [None req-86a3154b-8dd6-4035-9414-ab0f8508d762 - - - - - -] DHCP configuration for ports {'3882fb04-8b96-46c2-88b8-7e5a4c9b259f', 'c514a814-b6ef-40d3-a931-1f90bbf66ed1', '4ff8367d-1ec6-4eb1-b537-0292676235c2', '319125ac-c84e-4157-90b9-d51816743f04'} is completed#033[00m Oct 5 06:07:46 localhost openstack_network_exporter[250246]: ERROR 10:07:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 
06:07:46 localhost openstack_network_exporter[250246]: ERROR 10:07:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:07:46 localhost openstack_network_exporter[250246]: ERROR 10:07:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:07:46 localhost openstack_network_exporter[250246]: ERROR 10:07:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:07:46 localhost openstack_network_exporter[250246]: Oct 5 06:07:46 localhost openstack_network_exporter[250246]: ERROR 10:07:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:07:46 localhost openstack_network_exporter[250246]: Oct 5 06:07:46 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:46.848 271653 INFO neutron.agent.dhcp.agent [None req-e773e44e-9ba0-44aa-a336-7fc96249cca4 - - - - - -] DHCP configuration for ports {'c514a814-b6ef-40d3-a931-1f90bbf66ed1'} is completed#033[00m Oct 5 06:07:46 localhost dnsmasq[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 0 addresses Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host Oct 5 06:07:46 localhost dnsmasq-dhcp[334478]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts Oct 5 06:07:46 localhost podman[334535]: 2025-10-05 10:07:46.94714759 +0000 UTC m=+0.064491530 container kill adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:07:47 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:47.584 2 INFO neutron.agent.securitygroups_rpc [None req-f98984bf-27f2-402b-bf53-d253fe79e1e2 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.6 KiB/s wr, 71 op/s Oct 5 06:07:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e150 e150: 6 total, 6 up, 6 in Oct 5 06:07:48 localhost dnsmasq[334478]: exiting on receipt of SIGTERM Oct 5 06:07:48 localhost systemd[1]: tmp-crun.Jb5Y4t.mount: Deactivated successfully. Oct 5 06:07:48 localhost podman[334573]: 2025-10-05 10:07:48.2355014 +0000 UTC m=+0.064193951 container kill adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:07:48 localhost systemd[1]: libpod-adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b.scope: Deactivated successfully. 
Oct 5 06:07:48 localhost podman[334585]: 2025-10-05 10:07:48.308060318 +0000 UTC m=+0.056924764 container died adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:07:48 localhost podman[334585]: 2025-10-05 10:07:48.342752538 +0000 UTC m=+0.091616944 container cleanup adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:07:48 localhost systemd[1]: libpod-conmon-adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b.scope: Deactivated successfully. 
Oct 5 06:07:48 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:48.359 2 INFO neutron.agent.securitygroups_rpc [None req-cc4f7d40-abaa-4d93-8603-d8c24366c18f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:48 localhost podman[334587]: 2025-10-05 10:07:48.384268284 +0000 UTC m=+0.126647905 container remove adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:07:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:48 localhost nova_compute[297130]: 2025-10-05 10:07:48.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:48 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:48.984 2 INFO neutron.agent.securitygroups_rpc [None req-d4fd844c-5319-47bb-bde6-6036c467bd6b cb9d54cf786444a6a77a1980f4a1f3ac 3399a1ea839f4cce84fcedf3190ff04b - - default default] Security group member updated ['72863814-32f3-4006-a64f-d6dada584ee1']#033[00m Oct 5 06:07:49 localhost podman[334665]: Oct 5 06:07:49 localhost podman[334665]: 2025-10-05 10:07:49.224437553 +0000 UTC m=+0.092255033 container create 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:07:49 localhost systemd[1]: var-lib-containers-storage-overlay-b2c6fe074ff0292c9b91476f6274d4856ff6f32fee76c0eddc2ed2735ef92ba6-merged.mount: Deactivated successfully. Oct 5 06:07:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adcb6f4d2559d2140d1c701f6c3d3e1383479a95f07edaceffa415e2327faf5b-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:49 localhost systemd[1]: Started libpod-conmon-90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3.scope. Oct 5 06:07:49 localhost podman[334665]: 2025-10-05 10:07:49.181114309 +0000 UTC m=+0.048931819 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:07:49 localhost systemd[1]: Started libcrun container. 
Oct 5 06:07:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9033241c26c7038f8a35cf04244f1c419c805ddbeb9d5b834c3890ea33b4f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 5 06:07:49 localhost podman[334665]: 2025-10-05 10:07:49.297928445 +0000 UTC m=+0.165745965 container init 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3)
Oct 5 06:07:49 localhost podman[334665]: 2025-10-05 10:07:49.310693991 +0000 UTC m=+0.178511471 container start 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:07:49 localhost dnsmasq[334683]: started, version 2.85 cachesize 150
Oct 5 06:07:49 localhost dnsmasq[334683]: DNS service limited to local subnets
Oct 5 06:07:49 localhost dnsmasq[334683]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 5 06:07:49 localhost dnsmasq[334683]: warning: no upstream servers configured
Oct 5 06:07:49 localhost dnsmasq-dhcp[334683]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 5 06:07:49 localhost dnsmasq-dhcp[334683]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d
Oct 5 06:07:49 localhost dnsmasq[334683]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/addn_hosts - 0 addresses
Oct 5 06:07:49 localhost dnsmasq-dhcp[334683]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/host
Oct 5 06:07:49 localhost dnsmasq-dhcp[334683]: read /var/lib/neutron/dhcp/be33f40d-88d6-47cb-afd2-1621b1101610/opts
Oct 5 06:07:49 localhost ovn_controller[157556]: 2025-10-05T10:07:49Z|00180|binding|INFO|Removing iface tap3882fb04-8b ovn-installed in OVS
Oct 5 06:07:49 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:49.425 163201 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a1d1272d-1a38-4cb6-acc9-06789551553d with type ""#033[00m
Oct 5 06:07:49 localhost ovn_controller[157556]: 2025-10-05T10:07:49Z|00181|binding|INFO|Removing lport 3882fb04-8b96-46c2-88b8-7e5a4c9b259f ovn-installed in OVS
Oct 5 06:07:49 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:49.427 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8:0:2::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-be33f40d-88d6-47cb-afd2-1621b1101610', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be33f40d-88d6-47cb-afd2-1621b1101610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3399a1ea839f4cce84fcedf3190ff04b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53994ea7-01d9-419a-9d2b-f5d50f60cc0e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3882fb04-8b96-46c2-88b8-7e5a4c9b259f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:07:49 localhost nova_compute[297130]: 2025-10-05 10:07:49.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:49 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:49.430 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 3882fb04-8b96-46c2-88b8-7e5a4c9b259f in datapath be33f40d-88d6-47cb-afd2-1621b1101610 unbound from our chassis#033[00m
Oct 5 06:07:49 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:49.432 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network be33f40d-88d6-47cb-afd2-1621b1101610 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct 5 06:07:49 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:49.433 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9c06534f-599f-4fbc-91f5-d29402af4d82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:07:49 localhost nova_compute[297130]: 2025-10-05 10:07:49.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:49.567 271653 INFO neutron.agent.dhcp.agent [None req-b036c7d0-60da-47a1-9366-5f1d73575082 - - - - - -] DHCP configuration for ports {'3882fb04-8b96-46c2-88b8-7e5a4c9b259f', '4ff8367d-1ec6-4eb1-b537-0292676235c2', '319125ac-c84e-4157-90b9-d51816743f04'} is completed#033[00m
Oct 5 06:07:49 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:49.643 2 INFO neutron.agent.securitygroups_rpc [None req-a1813b30-646a-4cbf-957a-44cbfadeea3e 70cea673858c4ca7a047572a65bd009d ca6cedc436004b98b4d6a7b8317517ef - - default default] Security group member updated ['b57cacfc-2c28-480a-b1eb-ffe3c939d72c']#033[00m
Oct 5 06:07:49 localhost dnsmasq[334683]: exiting on receipt of SIGTERM
Oct 5 06:07:49 localhost podman[334701]: 2025-10-05 10:07:49.683560191 +0000 UTC m=+0.063954245 container kill 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 5 06:07:49 localhost systemd[1]: libpod-90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3.scope: Deactivated successfully.
Oct 5 06:07:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Oct 5 06:07:49 localhost podman[334716]: 2025-10-05 10:07:49.759728016 +0000 UTC m=+0.059918085 container died 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 06:07:49 localhost podman[334716]: 2025-10-05 10:07:49.793628615 +0000 UTC m=+0.093818584 container cleanup 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:07:49 localhost systemd[1]: libpod-conmon-90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3.scope: Deactivated successfully.
Oct 5 06:07:49 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:49.805 2 INFO neutron.agent.securitygroups_rpc [None req-a7fc5318-33ca-4417-b8ae-a9518ec1d261 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:07:49 localhost podman[334717]: 2025-10-05 10:07:49.912750386 +0000 UTC m=+0.208418073 container remove 90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-be33f40d-88d6-47cb-afd2-1621b1101610, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 5 06:07:49 localhost nova_compute[297130]: 2025-10-05 10:07:49.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:49 localhost nova_compute[297130]: 2025-10-05 10:07:49.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:49 localhost kernel: device tap3882fb04-8b left promiscuous mode
Oct 5 06:07:49 localhost nova_compute[297130]: 2025-10-05 10:07:49.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:49 localhost dnsmasq[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/addn_hosts - 0 addresses
Oct 5 06:07:49 localhost dnsmasq-dhcp[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/host
Oct 5 06:07:49 localhost dnsmasq-dhcp[334219]: read /var/lib/neutron/dhcp/31ea1d93-95c6-4900-b239-e9f9ea6a016b/opts
Oct 5 06:07:49 localhost podman[334758]: 2025-10-05 10:07:49.947903138 +0000 UTC m=+0.112895371 container kill 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 06:07:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:49.971 271653 INFO neutron.agent.dhcp.agent [None req-87c9a7bb-ab5d-4ce0-908b-e0c3ad6fbe23 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:07:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:49.972 271653 INFO neutron.agent.dhcp.agent [None req-87c9a7bb-ab5d-4ce0-908b-e0c3ad6fbe23 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:07:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:49.973 271653 INFO neutron.agent.dhcp.agent [None req-87c9a7bb-ab5d-4ce0-908b-e0c3ad6fbe23 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:07:50 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e151 e151: 6 total, 6 up, 6 in
Oct 5 06:07:50 localhost nova_compute[297130]: 2025-10-05 10:07:50.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:50 localhost systemd[1]: var-lib-containers-storage-overlay-ed9033241c26c7038f8a35cf04244f1c419c805ddbeb9d5b834c3890ea33b4f5-merged.mount: Deactivated successfully.
Oct 5 06:07:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90005c740ba5ac72713e2ceb97419215b1dabe2559de8754f21cacb11c0843b3-userdata-shm.mount: Deactivated successfully.
Oct 5 06:07:50 localhost systemd[1]: run-netns-qdhcp\x2dbe33f40d\x2d88d6\x2d47cb\x2dafd2\x2d1621b1101610.mount: Deactivated successfully.
Oct 5 06:07:50 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:50.350 2 INFO neutron.agent.securitygroups_rpc [None req-e3301919-1a61-48e0-baea-7f35439c8826 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:07:50 localhost podman[334794]: 2025-10-05 10:07:50.42412918 +0000 UTC m=+0.062177247 container kill 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:07:50 localhost dnsmasq[334219]: exiting on receipt of SIGTERM
Oct 5 06:07:50 localhost systemd[1]: libpod-57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10.scope: Deactivated successfully.
Oct 5 06:07:50 localhost podman[334808]: 2025-10-05 10:07:50.483372327 +0000 UTC m=+0.044554079 container died 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 5 06:07:50 localhost ovn_controller[157556]: 2025-10-05T10:07:50Z|00182|binding|INFO|Removing iface tap6bcaa5aa-9c ovn-installed in OVS
Oct 5 06:07:50 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:50.508 163201 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 7a6603d5-9138-4b31-8e21-7e16f241b2eb with type ""#033[00m
Oct 5 06:07:50 localhost ovn_controller[157556]: 2025-10-05T10:07:50Z|00183|binding|INFO|Removing lport 6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac ovn-installed in OVS
Oct 5 06:07:50 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:50.509 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-31ea1d93-95c6-4900-b239-e9f9ea6a016b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31ea1d93-95c6-4900-b239-e9f9ea6a016b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cedc436004b98b4d6a7b8317517ef', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4084cef0-adb6-455b-9413-4ccc06202d1a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:07:50 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:50.511 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 6bcaa5aa-9cb3-421e-88aa-5e3aaf4a1eac in datapath 31ea1d93-95c6-4900-b239-e9f9ea6a016b unbound from our chassis#033[00m
Oct 5 06:07:50 localhost nova_compute[297130]: 2025-10-05 10:07:50.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:50 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:50.516 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31ea1d93-95c6-4900-b239-e9f9ea6a016b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:07:50 localhost nova_compute[297130]: 2025-10-05 10:07:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:50 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:50.517 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[eadefeb0-a8b4-47aa-a2ba-f7537ff56c8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:07:50 localhost podman[334808]: 2025-10-05 10:07:50.52036669 +0000 UTC m=+0.081548402 container cleanup 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 5 06:07:50 localhost systemd[1]: libpod-conmon-57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10.scope: Deactivated successfully.
Oct 5 06:07:50 localhost podman[334809]: 2025-10-05 10:07:50.599768462 +0000 UTC m=+0.152875735 container remove 57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-31ea1d93-95c6-4900-b239-e9f9ea6a016b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 5 06:07:50 localhost nova_compute[297130]: 2025-10-05 10:07:50.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:50 localhost kernel: device tap6bcaa5aa-9c left promiscuous mode
Oct 5 06:07:50 localhost nova_compute[297130]: 2025-10-05 10:07:50.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:50 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:50.635 271653 INFO neutron.agent.dhcp.agent [None req-52d0c00c-0bd4-44e7-a372-daf7d1fa53d8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:07:50 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:50.796 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:07:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e152 e152: 6 total, 6 up, 6 in
Oct 5 06:07:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:07:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:07:51 localhost podman[334835]: 2025-10-05 10:07:51.177352142 +0000 UTC m=+0.089737134 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 5 06:07:51 localhost podman[334835]: 2025-10-05 10:07:51.191175636 +0000 UTC m=+0.103560678 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd)
Oct 5 06:07:51 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:07:51 localhost systemd[1]: var-lib-containers-storage-overlay-9fdc1b2718eef190c2068061a62b8d5847b9dc61c34bf2a7aae03db3f4ca4a0b-merged.mount: Deactivated successfully.
Oct 5 06:07:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57c69ba1db83ab014a9012be508c6f3d976cde6ac01964be02d8e82040695c10-userdata-shm.mount: Deactivated successfully.
Oct 5 06:07:51 localhost systemd[1]: run-netns-qdhcp\x2d31ea1d93\x2d95c6\x2d4900\x2db239\x2de9f9ea6a016b.mount: Deactivated successfully.
Oct 5 06:07:51 localhost podman[334836]: 2025-10-05 10:07:51.280304243 +0000 UTC m=+0.188295236 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:07:51 localhost podman[334836]: 2025-10-05 10:07:51.289843491 +0000 UTC m=+0.197834484 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 06:07:51 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 06:07:51 localhost nova_compute[297130]: 2025-10-05 10:07:51.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.3 KiB/s wr, 57 op/s
Oct 5 06:07:51 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:51.773 2 INFO neutron.agent.securitygroups_rpc [None req-baee9750-4fae-4383-899a-441f63f595a9 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:07:52 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:52.214 2 INFO neutron.agent.securitygroups_rpc [None req-9451ac2e-b360-457e-98f5-d6f0be03282d f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:07:52 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e153 e153: 6 total, 6 up, 6 in
Oct 5 06:07:53 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:53.500 2 INFO neutron.agent.securitygroups_rpc [None req-2fc340b3-3c1a-47ed-be01-f4e92de921f9 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:07:53 localhost dnsmasq[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/addn_hosts - 0 addresses
Oct 5 06:07:53 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/host
Oct 5 06:07:53 localhost dnsmasq-dhcp[332719]: read /var/lib/neutron/dhcp/ba615150-6291-461e-914d-8614dd64d36b/opts
Oct 5 06:07:53 localhost podman[334892]: 2025-10-05 10:07:53.501317542 +0000 UTC m=+0.058817376 container kill 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 5 06:07:53 localhost nova_compute[297130]: 2025-10-05 10:07:53.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:53 localhost kernel: device tap87cdc8bc-ef left promiscuous mode
Oct 5 06:07:53 localhost ovn_controller[157556]: 2025-10-05T10:07:53Z|00184|binding|INFO|Releasing lport 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a from this chassis (sb_readonly=0)
Oct 5 06:07:53 localhost ovn_controller[157556]: 2025-10-05T10:07:53Z|00185|binding|INFO|Setting lport 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a down in Southbound
Oct 5 06:07:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 4.9 KiB/s wr, 120 op/s
Oct 5 06:07:53 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:53.712 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-ba615150-6291-461e-914d-8614dd64d36b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba615150-6291-461e-914d-8614dd64d36b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f191af5fd15547479e573ab11c825146', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=025f706c-92d7-4ab9-940b-451c21f74524, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:07:53 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:53.714 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 87cdc8bc-efb0-406f-8e72-04c0c7c8ef8a in datapath ba615150-6291-461e-914d-8614dd64d36b unbound from our chassis#033[00m
Oct 5 06:07:53 localhost nova_compute[297130]: 2025-10-05 10:07:53.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:53 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:53.719 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba615150-6291-461e-914d-8614dd64d36b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:07:53 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:53.719 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[13b63ad8-98ab-4cde-9d32-eb4d3a015027]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:07:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 5 06:07:53 localhost nova_compute[297130]: 2025-10-05 10:07:53.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:07:54 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:54.066 2 INFO neutron.agent.securitygroups_rpc [None req-6a71a440-fbf3-4537-abe8-a7eff2851fe0 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:07:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:07:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:07:54 localhost podman[334916]: 2025-10-05 10:07:54.90908704 +0000 UTC m=+0.080529014 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 5 06:07:54 localhost podman[334916]: 2025-10-05 10:07:54.923203932 +0000 UTC m=+0.094645876 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro',
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:07:54 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:07:55 localhost podman[334917]: 2025-10-05 10:07:55.016225425 +0000 UTC m=+0.182819458 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:07:55 localhost podman[334917]: 2025-10-05 10:07:55.110953713 +0000 UTC m=+0.277547766 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3) Oct 5 06:07:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e154 e154: 6 total, 6 up, 6 in Oct 5 06:07:55 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:07:55 localhost nova_compute[297130]: 2025-10-05 10:07:55.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 4.9 KiB/s wr, 120 op/s Oct 5 06:07:55 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:07:55 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:07:55 localhost podman[334977]: 2025-10-05 10:07:55.736965956 +0000 UTC m=+0.077978715 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:07:55 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:07:55 localhost nova_compute[297130]: 2025-10-05 10:07:55.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:55 localhost systemd[1]: tmp-crun.dcVzzv.mount: Deactivated successfully. 
Oct 5 06:07:55 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:55.919 2 INFO neutron.agent.securitygroups_rpc [None req-1d269ddf-7645-4d24-b33d-d166450d0e87 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:07:56 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:56.005 2 INFO neutron.agent.securitygroups_rpc [None req-03f5ff11-97cf-4122-b429-a6f11423915d f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:56 localhost podman[248157]: time="2025-10-05T10:07:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:07:56 localhost podman[248157]: @ - - [05/Oct/2025:10:07:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149964 "" "Go-http-client/1.1" Oct 5 06:07:56 localhost podman[248157]: @ - - [05/Oct/2025:10:07:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20274 "" "Go-http-client/1.1" Oct 5 06:07:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e155 e155: 6 total, 6 up, 6 in Oct 5 06:07:56 localhost dnsmasq[332719]: exiting on receipt of SIGTERM Oct 5 06:07:56 localhost podman[335012]: 2025-10-05 10:07:56.507422285 +0000 UTC m=+0.053705697 container kill 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:56 localhost systemd[1]: libpod-48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc.scope: Deactivated successfully. Oct 5 06:07:56 localhost podman[335024]: 2025-10-05 10:07:56.544425968 +0000 UTC m=+0.027406724 container died 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:07:56 localhost systemd[1]: tmp-crun.D1ShzD.mount: Deactivated successfully. Oct 5 06:07:56 localhost podman[335024]: 2025-10-05 10:07:56.583832897 +0000 UTC m=+0.066813633 container cleanup 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Oct 5 06:07:56 localhost systemd[1]: libpod-conmon-48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc.scope: Deactivated successfully. 
Oct 5 06:07:56 localhost podman[335032]: 2025-10-05 10:07:56.665214573 +0000 UTC m=+0.134032585 container remove 48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba615150-6291-461e-914d-8614dd64d36b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:07:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e156 e156: 6 total, 6 up, 6 in Oct 5 06:07:56 localhost systemd[1]: var-lib-containers-storage-overlay-b6f7ae6c7d41fd299d0cb05acc8e501524bebb57650ce902e98d5a0e29632750-merged.mount: Deactivated successfully. Oct 5 06:07:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48b4650462695ae1909319ba700edc2cb981de97eebfa4e3cd6fde8a028839bc-userdata-shm.mount: Deactivated successfully. Oct 5 06:07:56 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:56.915 271653 INFO neutron.agent.dhcp.agent [None req-f4c89b4f-4162-47c7-82db-240d12a9a080 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:56 localhost systemd[1]: run-netns-qdhcp\x2dba615150\x2d6291\x2d461e\x2d914d\x2d8614dd64d36b.mount: Deactivated successfully. 
Oct 5 06:07:56 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:56.920 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:57 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:57.057 2 INFO neutron.agent.securitygroups_rpc [None req-b7385107-6494-4a61-8dc0-25a9aaea9b2e f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:57 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:07:57.138 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:07:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 4.9 KiB/s wr, 92 op/s Oct 5 06:07:58 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:58.083 2 INFO neutron.agent.securitygroups_rpc [None req-53177ba6-7660-4de8-8fe1-b74b9593f398 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:07:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e157 e157: 6 total, 6 up, 6 in Oct 5 06:07:58 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:58.172 2 INFO neutron.agent.securitygroups_rpc [None req-53177ba6-7660-4de8-8fe1-b74b9593f398 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:07:58 localhost dnsmasq[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/addn_hosts - 0 addresses Oct 5 06:07:58 localhost dnsmasq-dhcp[333298]: read /var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/host Oct 5 06:07:58 localhost dnsmasq-dhcp[333298]: read 
/var/lib/neutron/dhcp/c232cf4f-cedb-4414-ad13-7d12f6d45a5b/opts Oct 5 06:07:58 localhost podman[335070]: 2025-10-05 10:07:58.463924911 +0000 UTC m=+0.063439190 container kill 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2) Oct 5 06:07:58 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:58.561 2 INFO neutron.agent.securitygroups_rpc [None req-97e921cc-a522-4545-a96d-289c7bae0092 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:07:58 localhost ovn_controller[157556]: 2025-10-05T10:07:58Z|00186|binding|INFO|Releasing lport 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 from this chassis (sb_readonly=0) Oct 5 06:07:58 localhost ovn_controller[157556]: 2025-10-05T10:07:58Z|00187|binding|INFO|Setting lport 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 down in Southbound Oct 5 06:07:58 localhost kernel: device tap41ad1065-ec left promiscuous mode Oct 5 06:07:58 localhost nova_compute[297130]: 2025-10-05 10:07:58.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:58 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:58.638 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, 
nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-c232cf4f-cedb-4414-ad13-7d12f6d45a5b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c232cf4f-cedb-4414-ad13-7d12f6d45a5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aeb79df06a24441fb7ff0aefdd8f34a4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f217c838-41f8-4c5c-94fe-6e7d7ca4a602, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:07:58 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:58.640 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 41ad1065-ecb0-44a1-a7f7-5f9ad70b33c2 in datapath c232cf4f-cedb-4414-ad13-7d12f6d45a5b unbound from our chassis#033[00m Oct 5 06:07:58 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:58.643 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c232cf4f-cedb-4414-ad13-7d12f6d45a5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:07:58 localhost ovn_metadata_agent[163196]: 2025-10-05 10:07:58.646 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[42451017-420b-47cc-afdb-ed9ae5807a0f]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:07:58 localhost nova_compute[297130]: 2025-10-05 10:07:58.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:58 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:58.775 2 INFO neutron.agent.securitygroups_rpc [None req-f66b887a-8f8a-485a-bab5-6f8586c7acf0 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:07:58 localhost nova_compute[297130]: 2025-10-05 10:07:58.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:07:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e158 e158: 6 total, 6 up, 6 in Oct 5 06:07:59 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:59.233 2 INFO neutron.agent.securitygroups_rpc [None req-0763796b-a68e-4290-9057-fc9d8ab18a1b f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:07:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 6.0 KiB/s wr, 113 op/s Oct 5 06:07:59 localhost neutron_sriov_agent[264647]: 2025-10-05 10:07:59.720 2 INFO neutron.agent.securitygroups_rpc [None req-b1991b49-5b7a-43f4-b492-528ba8afa1c6 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:00 localhost 
nova_compute[297130]: 2025-10-05 10:08:00.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:00 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e159 e159: 6 total, 6 up, 6 in Oct 5 06:08:00 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:00.643 2 INFO neutron.agent.securitygroups_rpc [None req-e18d5d70-b13c-4e2c-a6cd-c80f95b5c7d7 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:00 localhost nova_compute[297130]: 2025-10-05 10:08:00.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:00 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:08:00 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:08:00 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:08:00 localhost systemd[1]: tmp-crun.Omk69i.mount: Deactivated successfully. 
Oct 5 06:08:00 localhost podman[335109]: 2025-10-05 10:08:00.689521994 +0000 UTC m=+0.058075245 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001) Oct 5 06:08:01 localhost podman[335145]: 2025-10-05 10:08:01.160825163 +0000 UTC m=+0.065154828 container kill 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Oct 5 06:08:01 localhost dnsmasq[333298]: exiting on receipt of SIGTERM Oct 5 06:08:01 localhost systemd[1]: libpod-3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d.scope: Deactivated successfully. 
Oct 5 06:08:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e160 e160: 6 total, 6 up, 6 in Oct 5 06:08:01 localhost podman[335159]: 2025-10-05 10:08:01.239797493 +0000 UTC m=+0.061801936 container died 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:08:01 localhost podman[335159]: 2025-10-05 10:08:01.273630641 +0000 UTC m=+0.095635054 container cleanup 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Oct 5 06:08:01 localhost systemd[1]: libpod-conmon-3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d.scope: Deactivated successfully. 
Oct 5 06:08:01 localhost podman[335166]: 2025-10-05 10:08:01.330978036 +0000 UTC m=+0.141632091 container remove 3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c232cf4f-cedb-4414-ad13-7d12f6d45a5b, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:08:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:08:01.659 271653 INFO neutron.agent.dhcp.agent [None req-3bc9ee3c-b047-40f3-abd0-1595d4416407 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:08:01.660 271653 INFO neutron.agent.dhcp.agent [None req-3bc9ee3c-b047-40f3-abd0-1595d4416407 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:01 localhost systemd[1]: var-lib-containers-storage-overlay-b8dc5eb9f1baed2deaf1cb2fd9267f8c259eb28c1139757f1c03620613f2ff3f-merged.mount: Deactivated successfully. Oct 5 06:08:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c94a7ab3c6163e413e8fb389f931eea575629c38a09576f7cdf8d16af7e5d9d-userdata-shm.mount: Deactivated successfully. Oct 5 06:08:01 localhost systemd[1]: run-netns-qdhcp\x2dc232cf4f\x2dcedb\x2d4414\x2dad13\x2d7d12f6d45a5b.mount: Deactivated successfully. 
Oct 5 06:08:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 145 MiB data, 774 MiB used, 41 GiB / 42 GiB avail Oct 5 06:08:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e161 e161: 6 total, 6 up, 6 in Oct 5 06:08:01 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:01.797 2 INFO neutron.agent.securitygroups_rpc [None req-7b95b62b-bd84-486c-bb4f-6a16b0729e5e f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:01 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:08:01.949 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:08:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e162 e162: 6 total, 6 up, 6 in Oct 5 06:08:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 145 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 193 KiB/s rd, 11 KiB/s wr, 260 op/s Oct 5 06:08:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:04 localhost nova_compute[297130]: 2025-10-05 10:08:04.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:04 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2764921543' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:04 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2764921543' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:08:04 localhost podman[335188]: 2025-10-05 10:08:04.914032242 +0000 UTC m=+0.080938675 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Oct 5 06:08:04 localhost podman[335188]: 2025-10-05 10:08:04.928069203 +0000 UTC m=+0.094975676 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:08:04 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:08:05 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:05.147 2 INFO neutron.agent.securitygroups_rpc [None req-1aee9f1b-2dcb-448d-9738-dfd913b37f5e 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:05 localhost nova_compute[297130]: 2025-10-05 10:08:05.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:05 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:05.234 2 INFO neutron.agent.securitygroups_rpc [None req-cde486e8-5343-463f-82b5-fd49f79dd99f 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['ce138a34-e48c-4963-b3e5-a739b99229fc']#033[00m Oct 5 06:08:05 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:05.680 2 INFO neutron.agent.securitygroups_rpc [None req-defe0441-322c-41c6-b4e1-52a9b37ef436 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['ce138a34-e48c-4963-b3e5-a739b99229fc']#033[00m Oct 5 06:08:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 145 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 7.8 KiB/s wr, 189 op/s Oct 5 06:08:05 
localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:05.762 2 INFO neutron.agent.securitygroups_rpc [None req-f76a30a8-838b-4e9c-b656-b89caacfd59e 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:08:05 localhost podman[335207]: 2025-10-05 10:08:05.910242842 +0000 UTC m=+0.078193361 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:08:05 localhost podman[335207]: 2025-10-05 10:08:05.919104553 +0000 UTC m=+0.087055052 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:08:05 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:08:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e163 e163: 6 total, 6 up, 6 in Oct 5 06:08:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:08:06 localhost podman[335229]: 2025-10-05 10:08:06.913166263 +0000 UTC m=+0.076416393 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git) Oct 5 06:08:06 localhost podman[335229]: 2025-10-05 10:08:06.927030519 +0000 UTC m=+0.090280649 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=) Oct 5 06:08:06 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:08:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:07.069 2 INFO neutron.agent.securitygroups_rpc [None req-9f58c4be-87fa-4490-a743-6e6879b0c41b 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:07.322 2 INFO neutron.agent.securitygroups_rpc [None req-9850b512-ef78-4a81-a9e5-043b1902da93 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:07.463 2 INFO neutron.agent.securitygroups_rpc [None req-528b96f1-eb86-4d53-a25e-09c868ee3534 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 145 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 8.7 KiB/s wr, 233 op/s Oct 5 06:08:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:07.811 2 INFO neutron.agent.securitygroups_rpc [None req-144298d6-1015-4861-bc90-9514ae262cb1 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:08.116 2 INFO neutron.agent.securitygroups_rpc [None req-2ba3cc32-e760-4e9c-ae49-d16c10977ee3 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:08.513 2 INFO neutron.agent.securitygroups_rpc [None 
req-75847f43-d349-4d47-aac8-c32b688fcaaf 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:08.899 2 INFO neutron.agent.securitygroups_rpc [None req-597813ba-8c7b-483a-9702-9c6315f34eb1 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:09 localhost nova_compute[297130]: 2025-10-05 10:08:09.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:09 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:09.190 2 INFO neutron.agent.securitygroups_rpc [None req-f12db9b7-ef7e-487f-94ef-0f8cc2068c7f 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:09 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:09.544 2 INFO neutron.agent.securitygroups_rpc [None req-14148e70-70fe-4caa-9845-3c7fbf64477d 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 145 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 6.5 KiB/s wr, 175 op/s Oct 5 06:08:09 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:09.818 2 INFO neutron.agent.securitygroups_rpc [None req-2ac582cd-0625-449c-b0ff-bc862ac98f6d 0db80e9dfba74245967c3bde42355cd2 
5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['cbc1d89f-3acb-4ff1-8037-35599e686f81']#033[00m Oct 5 06:08:10 localhost nova_compute[297130]: 2025-10-05 10:08:10.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:10 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:10.950 2 INFO neutron.agent.securitygroups_rpc [None req-e45b1d7d-8924-460e-a1d3-88a484cb60d9 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['537a11fb-699e-467e-a2b8-e26ed1f1f5c6']#033[00m Oct 5 06:08:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:08:11 Oct 5 06:08:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:08:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:08:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['images', 'backups', 'manila_metadata', 'manila_data', 'volumes', 'vms', '.mgr'] Oct 5 06:08:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:08:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:08:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:08:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:08:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:08:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 145 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 1.1 KiB/s wr, 42 op/s Oct 5 06:08:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:08:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e164 e164: 6 total, 6 up, 6 in Oct 5 06:08:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021774090359203424 quantized to 32 (current 32) Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO 
root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:08:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:08:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:08:12 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:12.778 2 INFO neutron.agent.securitygroups_rpc [None req-6201c560-0d97-401f-bed6-27a3a279e37c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['6c86b567-a80c-4484-8582-b269952d5c98']#033[00m Oct 5 06:08:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:08:12 localhost podman[335249]: 2025-10-05 10:08:12.933927233 +0000 UTC m=+0.102605403 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 06:08:12 localhost podman[335249]: 2025-10-05 10:08:12.93934831 +0000 UTC 
m=+0.108026450 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:12 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:08:13 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:13.091 2 INFO neutron.agent.securitygroups_rpc [None req-9e8e82c4-0a2c-42e9-a046-19758c10d04a 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['6c86b567-a80c-4484-8582-b269952d5c98']#033[00m Oct 5 06:08:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 95 op/s Oct 5 06:08:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e165 e165: 6 total, 6 up, 6 in Oct 5 06:08:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:14 localhost nova_compute[297130]: 2025-10-05 10:08:14.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:14 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:14.958 2 INFO neutron.agent.securitygroups_rpc [None req-85222545-f8d6-422b-a67e-44c69923b0cd 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:15 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:15.065 2 INFO neutron.agent.securitygroups_rpc [None req-de1b1e72-8a6c-4b7b-b7af-986bf55b22e2 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['58dad359-5800-4e6b-8895-59e7fd2c651a']#033[00m Oct 5 06:08:15 localhost nova_compute[297130]: 2025-10-05 10:08:15.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:15 localhost nova_compute[297130]: 2025-10-05 10:08:15.289 2 DEBUG oslo_service.periodic_task [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:15 localhost nova_compute[297130]: 2025-10-05 10:08:15.290 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:15 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:15.373 2 INFO neutron.agent.securitygroups_rpc [None req-de3a123d-6c60-4f41-ac0d-b90ea96acf15 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:15 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:15.380 2 INFO neutron.agent.securitygroups_rpc [None req-7cf0dae4-4090-4a92-956d-ad2afe4a64bb 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['58dad359-5800-4e6b-8895-59e7fd2c651a']#033[00m Oct 5 06:08:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:15 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3974298807' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:15 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3974298807' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 51 op/s Oct 5 06:08:15 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:15.944 2 INFO neutron.agent.securitygroups_rpc [None req-49621add-f870-4536-93e0-3a0d1f472d8b 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:16 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:16.087 2 INFO neutron.agent.securitygroups_rpc [None req-9b29eb26-2b22-4613-b3b7-4a016dc2a02c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:16 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2622393805' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:16 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2622393805' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:16 localhost nova_compute[297130]: 2025-10-05 10:08:16.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:16 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:16.206 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:16 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:16.208 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:08:16 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:16.482 2 INFO neutron.agent.securitygroups_rpc [None req-ab5f092d-dcda-4c6f-8450-ab56670675a0 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:16 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:16.507 2 INFO neutron.agent.securitygroups_rpc [None req-cf6909d2-23cc-49f5-96ca-6c9380a21fff 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:16 localhost 
openstack_network_exporter[250246]: ERROR 10:08:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:08:16 localhost openstack_network_exporter[250246]: ERROR 10:08:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:16 localhost openstack_network_exporter[250246]: ERROR 10:08:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:16 localhost openstack_network_exporter[250246]: ERROR 10:08:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:08:16 localhost openstack_network_exporter[250246]: Oct 5 06:08:16 localhost openstack_network_exporter[250246]: ERROR 10:08:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:08:16 localhost openstack_network_exporter[250246]: Oct 5 06:08:16 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:16.850 2 INFO neutron.agent.securitygroups_rpc [None req-07930e6e-c307-44e9-a139-d00dd44cc84c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:17.258 2 INFO neutron.agent.securitygroups_rpc [None req-973a305c-8af9-4ec1-94f7-2edc593eba32 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost nova_compute[297130]: 2025-10-05 10:08:17.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:17 localhost nova_compute[297130]: 2025-10-05 10:08:17.271 2 DEBUG 
oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:17 localhost nova_compute[297130]: 2025-10-05 10:08:17.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:08:17 localhost nova_compute[297130]: 2025-10-05 10:08:17.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:08:17 localhost nova_compute[297130]: 2025-10-05 10:08:17.294 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:17.420 2 INFO neutron.agent.securitygroups_rpc [None req-6754f8fa-755c-46fd-a6ca-e0f398a31797 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:17.661 2 INFO neutron.agent.securitygroups_rpc [None req-04379d75-12fa-4c5c-bf8a-78d937b944a7 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 192 MiB data, 861 MiB used, 41 GiB / 42 GiB avail; 176 KiB/s rd, 2.7 MiB/s wr, 248 op/s Oct 5 06:08:17 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:17.916 2 INFO neutron.agent.securitygroups_rpc [None req-4dac2c99-1702-43da-bde0-d18a60eeb90c 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['7e2785aa-0ba6-4c0e-a85f-2b39d6c59d70']#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:17.961 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 
'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:17.963 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:17 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:17.965 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:17 localhost 
ovn_metadata_agent[163196]: 2025-10-05 10:08:17.966 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[c23efc1a-d0bd-4d40-9738-8df0dc5f922a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:18 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:18.184 2 INFO neutron.agent.securitygroups_rpc [None req-fa3e679b-0b2a-4fe6-b41b-ad06e38f79b9 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:18 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:18.480 2 INFO neutron.agent.securitygroups_rpc [None req-74735a9a-80e5-4814-860d-f9b2a32232ca 0db80e9dfba74245967c3bde42355cd2 5936e634b08e422289f0d2afb771b54f - - default default] Security group rule updated ['26da3659-ebd2-47ce-a46d-c888047f4570']#033[00m Oct 5 06:08:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e166 e166: 6 total, 6 up, 6 in Oct 5 06:08:18 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:18.947 2 INFO neutron.agent.securitygroups_rpc [None req-fdf57d7a-2c8a-4d86-a812-b647fef0ddda f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:19 localhost nova_compute[297130]: 2025-10-05 10:08:19.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 192 MiB data, 861 MiB used, 41 GiB / 42 GiB avail; 177 KiB/s rd, 2.7 MiB/s wr, 249 op/s Oct 5 06:08:19 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:19.728 2 
INFO neutron.agent.securitygroups_rpc [None req-abce5ea6-68db-4abb-9930-124c328ad3d9 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:20 localhost nova_compute[297130]: 2025-10-05 10:08:20.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:20 localhost nova_compute[297130]: 2025-10-05 10:08:20.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:20 localhost nova_compute[297130]: 2025-10-05 10:08:20.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:20.406 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:20.407 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:08:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:20.407 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: 
held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:08:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4266057669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4266057669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:21.210 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.289 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 
10:08:21.290 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.290 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.290 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.291 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:21.411 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:21.412 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:21.413 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the 
namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:21.414 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fd3212-5177-4340-85ea-7c526aa2918c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 192 MiB data, 861 MiB used, 41 GiB / 42 GiB avail; 146 KiB/s rd, 7.0 KiB/s wr, 197 op/s Oct 5 06:08:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:08:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4185696337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.736 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:08:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:08:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:08:21 localhost podman[335289]: 2025-10-05 10:08:21.914982214 +0000 UTC m=+0.076739973 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd) Oct 5 06:08:21 localhost podman[335289]: 2025-10-05 10:08:21.95323941 +0000 UTC m=+0.114997209 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.955 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.956 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11562MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.956 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:08:21 localhost nova_compute[297130]: 2025-10-05 10:08:21.957 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:08:21 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:08:22 localhost podman[335290]: 2025-10-05 10:08:22.025289924 +0000 UTC m=+0.183944589 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.031 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.032 2 DEBUG nova.compute.resource_tracker [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:08:22 localhost podman[335290]: 2025-10-05 10:08:22.032935231 +0000 UTC m=+0.191589886 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:08:22 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.048 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:08:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2039624101' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2039624101' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.089 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.090 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating inventory 
in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.108 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.137 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: 
COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.169 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:08:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} 
v 0) Oct 5 06:08:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3922442199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.626 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.631 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.643 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:08:22 localhost nova_compute[297130]: 2025-10-05 10:08:22.659 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:08:22 localhost 
nova_compute[297130]: 2025-10-05 10:08:22.659 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:22.723 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:22.725 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:22.727 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:22.728 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[b65d0733-3731-4087-9d8b-67ec1d86ad26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:23 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:23.169 2 INFO neutron.agent.securitygroups_rpc [None req-d6921d14-97db-4869-b366-00d629b9750a f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:23.329 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:92:4c 10.100.0.18 10.100.0.3'], port_security=[], type=localport, 
nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1c05772-11ce-49c5-9c9a-89c03bd89305, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=da243c3c-62f4-42e5-a938-23cdc589ecbd) old=Port_Binding(mac=['fa:16:3e:ca:92:4c 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57b1a27a-3bbd-4de3-add9-1f79ae5d5e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:23.332 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port da243c3c-62f4-42e5-a938-23cdc589ecbd in datapath 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20 updated#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:23.335 163201 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57b1a27a-3bbd-4de3-add9-1f79ae5d5e20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:23 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:23.336 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[70b41e83-42f7-4dcb-bba7-d396c79b0ed9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:23 localhost nova_compute[297130]: 2025-10-05 10:08:23.659 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 192 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 175 KiB/s rd, 7.6 KiB/s wr, 234 op/s Oct 5 06:08:23 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:23.824 2 INFO neutron.agent.securitygroups_rpc [None req-d51aca8d-da7c-4704-906f-6931aa168f0c f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:24.005 2 INFO neutron.agent.securitygroups_rpc [None req-a1ba6929-3c17-4bbf-a2e7-fdef375ea348 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:24 localhost nova_compute[297130]: 2025-10-05 10:08:24.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:24 localhost nova_compute[297130]: 2025-10-05 10:08:24.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:08:24 localhost nova_compute[297130]: 2025-10-05 10:08:24.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:08:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:24.604 2 INFO neutron.agent.securitygroups_rpc [None req-c63ba15e-4b1c-43e5-b818-071f4bd9ad21 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:08:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:08:25 localhost podman[335354]: 2025-10-05 10:08:25.154994358 +0000 UTC m=+0.095931611 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:08:25 localhost podman[335354]: 2025-10-05 10:08:25.165047111 +0000 UTC m=+0.105984364 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:08:25 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:08:25 localhost systemd[1]: tmp-crun.TVq4RR.mount: Deactivated successfully. 
Oct 5 06:08:25 localhost podman[335372]: 2025-10-05 10:08:25.253237572 +0000 UTC m=+0.090228977 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:08:25 localhost nova_compute[297130]: 2025-10-05 10:08:25.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:25 localhost podman[335372]: 2025-10-05 10:08:25.344191308 +0000 UTC m=+0.181182733 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller) Oct 5 06:08:25 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:08:25 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:25.525 2 INFO neutron.agent.securitygroups_rpc [None req-e0f71c4c-b824-4813-9d2e-f9400293976a 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 192 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 7.6 KiB/s wr, 233 op/s Oct 5 06:08:26 localhost podman[248157]: time="2025-10-05T10:08:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:08:26 localhost podman[248157]: @ - - [05/Oct/2025:10:08:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:08:26 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:26.049 2 INFO neutron.agent.securitygroups_rpc [None req-27332e57-044b-4d32-bccf-cfbbc12d6206 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:26 localhost podman[248157]: @ - - [05/Oct/2025:10:08:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19343 "" "Go-http-client/1.1" Oct 5 06:08:26 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:26.844 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 
'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:26 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:26.845 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:26 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:26.847 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed 
_get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:26 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:26.848 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[87282b1f-bd83-4ca2-a542-78ebfc404418]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 192 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 2.0 KiB/s wr, 75 op/s Oct 5 06:08:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:28.836 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3'], 
external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:28.838 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:28.840 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:28 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:28.841 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a20b15-4aac-4498-8a95-af8c752930ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:29 localhost nova_compute[297130]: 2025-10-05 10:08:29.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 192 MiB data, 
882 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 1.8 KiB/s wr, 69 op/s Oct 5 06:08:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:08:29 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:08:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:08:29 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:08:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:08:30 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev fb0d5107-3918-4406-85d7-fe5f4ed9213e (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:08:30 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev fb0d5107-3918-4406-85d7-fe5f4ed9213e (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:08:30 localhost ceph-mgr[301363]: [progress INFO root] Completed event fb0d5107-3918-4406-85d7-fe5f4ed9213e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:08:30 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:08:30 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:08:30 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:30.301 2 INFO neutron.agent.securitygroups_rpc [None 
req-2eb29681-e6ef-46ba-8164-f72576369c13 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:30 localhost nova_compute[297130]: 2025-10-05 10:08:30.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:30 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:08:30 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:08:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e167 e167: 6 total, 6 up, 6 in Oct 5 06:08:31 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:31.059 2 INFO neutron.agent.securitygroups_rpc [None req-be59150d-24ad-45ae-bfd9-19a1a83e055f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:31 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:31.161 2 INFO neutron.agent.securitygroups_rpc [None req-7d247693-29e2-4c27-86b9-d6a3c432c80d 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:31 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:31.646 2 INFO neutron.agent.securitygroups_rpc [None req-39bd4488-6c14-403b-b8d2-d9ce2663591d 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 192 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 2.0 KiB/s wr, 75 op/s Oct 5 
06:08:31 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:08:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:08:32 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:08:32 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:32.330 2 INFO neutron.agent.securitygroups_rpc [None req-5ea7cfba-1518-4eca-931c-3192665c5231 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:32 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:32.688 2 INFO neutron.agent.securitygroups_rpc [None req-bf81a202-147c-434d-a64b-f623ced280eb 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:32.958 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.3 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:32.960 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:32.962 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:32 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:32.963 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb401ed-1cc9-4dd6-ab21-4ddc3467bcc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e168 e168: 6 
total, 6 up, 6 in Oct 5 06:08:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 192 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 43 op/s Oct 5 06:08:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:34 localhost nova_compute[297130]: 2025-10-05 10:08:34.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e169 e169: 6 total, 6 up, 6 in Oct 5 06:08:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:35.157 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], 
requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:35.159 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:35.161 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:35.162 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[5be23e81-68b2-4546-87bc-e647627c4bc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:35 localhost nova_compute[297130]: 2025-10-05 10:08:35.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 192 MiB data, 882 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 3.2 
KiB/s wr, 57 op/s Oct 5 06:08:35 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:35.821 2 INFO neutron.agent.securitygroups_rpc [None req-4fe56172-abeb-4404-b26f-e7876d89e5d1 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:08:35 localhost podman[335486]: 2025-10-05 10:08:35.932046084 +0000 UTC m=+0.093571448 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible) Oct 5 06:08:35 localhost podman[335486]: 2025-10-05 10:08:35.943957076 +0000 UTC m=+0.105482450 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:08:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:08:35 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:08:36 localhost podman[335505]: 2025-10-05 10:08:36.060106456 +0000 UTC m=+0.080829133 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:08:36 localhost podman[335505]: 2025-10-05 10:08:36.068990357 +0000 UTC m=+0.089713064 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:08:36 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:08:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e170 e170: 6 total, 6 up, 6 in Oct 5 06:08:36 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:36.864 2 INFO neutron.agent.securitygroups_rpc [None req-ee8e0800-10a4-4955-b11c-b453e546bec5 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:37 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4110357310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:37 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4110357310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 192 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 1.5 MiB/s rd, 8.7 KiB/s wr, 198 op/s Oct 5 06:08:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:08:37 localhost podman[335528]: 2025-10-05 10:08:37.916019093 +0000 UTC m=+0.079536857 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350) Oct 5 06:08:37 localhost podman[335528]: 2025-10-05 10:08:37.928470151 +0000 UTC m=+0.091987905 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the 
minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:08:37 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:08:38 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:38.236 2 INFO neutron.agent.securitygroups_rpc [None req-ea92753e-0357-45dc-8c63-b37c505d2763 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.884 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 
localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.885 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 
12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:08:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:08:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:39 localhost nova_compute[297130]: 2025-10-05 10:08:39.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 192 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 5.0 KiB/s wr, 126 op/s Oct 5 06:08:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e171 e171: 6 total, 6 up, 6 in Oct 5 06:08:40 localhost nova_compute[297130]: 2025-10-05 10:08:40.365 2 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:40.939 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 10.100.0.2 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '15', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:40.941 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:40.943 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:40 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:40.943 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[0fffca24-136c-408a-9d32-7330d69e0159]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:08:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:08:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:08:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:08:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 192 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 5.0 KiB/s wr, 127 op/s Oct 5 06:08:41 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:41.731 2 INFO neutron.agent.securitygroups_rpc [None req-7e721a7e-d21f-4645-a45c-3d855438b9f0 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:08:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:08:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e172 e172: 6 total, 6 up, 6 in Oct 5 06:08:41 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:41.881 2 INFO neutron.agent.securitygroups_rpc [None req-46b5e596-709d-4f9e-9c8c-ea4b8685f500 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:08:42 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:42.496 2 INFO neutron.agent.securitygroups_rpc [None req-7d93ff03-dd34-4397-8de2-2f769fd63ab7 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 238 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 218 op/s Oct 5 06:08:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:08:43 localhost podman[335547]: 2025-10-05 10:08:43.925724583 +0000 UTC m=+0.089575739 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:08:43 localhost podman[335547]: 2025-10-05 10:08:43.95733928 +0000 UTC 
m=+0.121190416 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:08:43 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:08:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:44 localhost nova_compute[297130]: 2025-10-05 10:08:44.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:44 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/858014342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:44 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/858014342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:45.256 2 INFO neutron.agent.securitygroups_rpc [None req-e552e646-5615-4c88-85b4-014221181c72 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:45 localhost nova_compute[297130]: 2025-10-05 10:08:45.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 238 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 101 op/s Oct 5 06:08:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:45.938 2 INFO neutron.agent.securitygroups_rpc [None 
req-aeaf9d2a-4214-4a38-954b-4933dbf990b2 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:46 localhost openstack_network_exporter[250246]: ERROR 10:08:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:46 localhost openstack_network_exporter[250246]: ERROR 10:08:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:08:46 localhost openstack_network_exporter[250246]: ERROR 10:08:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:08:46 localhost openstack_network_exporter[250246]: ERROR 10:08:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:08:46 localhost openstack_network_exporter[250246]: Oct 5 06:08:46 localhost openstack_network_exporter[250246]: ERROR 10:08:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:08:46 localhost openstack_network_exporter[250246]: Oct 5 06:08:47 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:47.073 2 INFO neutron.agent.securitygroups_rpc [None req-dc69c701-7ca0-4311-bb6d-f3f426b0fdc0 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['72f8357d-4c2a-4c55-a9b5-4ba9e09e68d5']#033[00m Oct 5 06:08:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 192 MiB data, 900 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 149 op/s Oct 5 06:08:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e173 e173: 6 total, 6 up, 6 in Oct 5 06:08:48 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:48.869 2 INFO neutron.agent.securitygroups_rpc [None req-97ecff30-885e-46f1-be36-64d2a7b05cf3 
f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:48 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:48.976 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3520bd-795e-496b-9bd8-63b98bafb741, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=405e4ec1-95c6-4d96-9868-4d6f8824ae0c) old=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:48 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:48.978 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 405e4ec1-95c6-4d96-9868-4d6f8824ae0c in datapath 6c5c636c-bc8a-429a-8f10-8f4508a77c3b updated#033[00m Oct 5 06:08:48 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:48.980 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:48 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:48.981 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[8d13f294-2a73-4336-9dce-7f9f3dfcc179]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:49 localhost nova_compute[297130]: 2025-10-05 10:08:49.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:49 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:49.312 2 INFO neutron.agent.securitygroups_rpc [None req-d91fca24-9e84-4376-871e-8b0391de4b7d f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:08:49.411 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, 
binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:08:49Z, description=, device_id=7317d80c-2795-4c74-8928-447aa6ca9c1b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7b8c121b-5561-4a56-998a-d460004fcdd4, ip_allocation=immediate, mac_address=fa:16:3e:50:55:79, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2586, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:08:49Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:08:49 localhost systemd[1]: tmp-crun.8igS01.mount: Deactivated successfully. 
Oct 5 06:08:49 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:08:49 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:08:49 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:08:49 localhost podman[335580]: 2025-10-05 10:08:49.65203299 +0000 UTC m=+0.067455470 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:08:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 192 MiB data, 900 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 149 op/s Oct 5 06:08:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:08:49 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2717397261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:08:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:08:49 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2717397261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:08:49 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:49.896 2 INFO neutron.agent.securitygroups_rpc [None req-441d5573-cc7b-44a8-a501-d1e820729695 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['72f8357d-4c2a-4c55-a9b5-4ba9e09e68d5', 'faf5b389-f9b1-4f45-9607-3142b5368a3b']#033[00m Oct 5 06:08:49 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:08:49.918 271653 INFO neutron.agent.dhcp.agent [None req-69dead89-ec40-4fa2-a3cf-78e2ebf15149 - - - - - -] DHCP configuration for ports {'7b8c121b-5561-4a56-998a-d460004fcdd4'} is completed#033[00m Oct 5 06:08:50 localhost nova_compute[297130]: 2025-10-05 10:08:50.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:50 localhost nova_compute[297130]: 2025-10-05 10:08:50.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:50 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:50.875 2 INFO neutron.agent.securitygroups_rpc [None req-1ce17b82-de95-46e3-b5c3-60132d9e9999 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['faf5b389-f9b1-4f45-9607-3142b5368a3b']#033[00m Oct 5 06:08:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 192 MiB data, 900 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s Oct 5 06:08:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e174 e174: 6 total, 6 up, 6 in Oct 5 06:08:52 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:52.073 2 INFO neutron.agent.securitygroups_rpc [None req-32c7cc31-7050-4c63-9938-69794efbcacf 
f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c7529e7-e2a4-4bb9-b51d-a992a432a8d2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:08:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, vol_name:cephfs) < "" Oct 5 06:08:52 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:08:52.471+0000 7f417fc90640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:08:52.471+0000 7f417fc90640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:08:52.471+0000 7f417fc90640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:08:52.471+0000 7f417fc90640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:08:52.471+0000 7f417fc90640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:08:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c7529e7-e2a4-4bb9-b51d-a992a432a8d2/.meta.tmp' Oct 5 06:08:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c7529e7-e2a4-4bb9-b51d-a992a432a8d2/.meta.tmp' to config b'/volumes/_nogroup/6c7529e7-e2a4-4bb9-b51d-a992a432a8d2/.meta' Oct 5 06:08:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, vol_name:cephfs) < "" Oct 5 06:08:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c7529e7-e2a4-4bb9-b51d-a992a432a8d2", "format": "json"}]: dispatch Oct 5 06:08:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, vol_name:cephfs) < "" Oct 5 06:08:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, vol_name:cephfs) < "" Oct 5 06:08:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 06:08:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:08:52 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:52.922 2 INFO neutron.agent.securitygroups_rpc [None req-03b7ae7c-5799-41ca-93aa-636cb21bad68 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:52 localhost podman[335614]: 2025-10-05 10:08:52.919098237 +0000 UTC m=+0.079597818 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:08:52 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e175 e175: 6 total, 6 up, 6 in Oct 5 06:08:52 localhost podman[335614]: 2025-10-05 10:08:52.936283703 +0000 UTC m=+0.096783294 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible) Oct 5 06:08:52 localhost podman[335615]: 2025-10-05 10:08:52.982757154 +0000 UTC m=+0.139948356 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:08:52 localhost podman[335615]: 2025-10-05 10:08:52.989555797 +0000 UTC m=+0.146746999 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:08:53 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:08:53 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:08:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 192 MiB data, 879 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 8.2 KiB/s wr, 96 op/s Oct 5 06:08:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e176 e176: 6 total, 6 up, 6 in Oct 5 06:08:53 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Oct 5 06:08:54 localhost nova_compute[297130]: 2025-10-05 10:08:54.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e177 e177: 6 total, 6 up, 6 in Oct 5 06:08:55 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:55.094 2 INFO neutron.agent.securitygroups_rpc [None req-47502f13-ad77-4309-9317-34e0f28a5ba4 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['e41dff43-d69f-4ffb-9be8-bbcee95191da']#033[00m Oct 5 06:08:55 localhost nova_compute[297130]: 2025-10-05 10:08:55.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:55 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:55.542 2 INFO neutron.agent.securitygroups_rpc [None req-91dbb4a3-e766-4b5e-bc7e-962722a958cb f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 192 MiB data, 879 MiB used, 41 GiB / 42 GiB avail; 105 KiB/s rd, 12 KiB/s wr, 144 op/s Oct 5 06:08:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:08:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:08:55 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:55.968 2 INFO neutron.agent.securitygroups_rpc [None req-35702abc-e8fd-459a-81a9-67f5d42d3960 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:55 localhost nova_compute[297130]: 2025-10-05 10:08:55.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:56 localhost systemd[1]: tmp-crun.blMDoV.mount: Deactivated successfully. Oct 5 06:08:56 localhost podman[335654]: 2025-10-05 10:08:56.008457669 +0000 UTC m=+0.167808421 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:08:56 localhost podman[248157]: time="2025-10-05T10:08:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:08:56 localhost podman[335655]: 2025-10-05 10:08:55.969233386 +0000 UTC m=+0.125437973 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Oct 5 06:08:56 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:08:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:08:56 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta.tmp' Oct 5 06:08:56 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta.tmp' to config b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta' Oct 5 06:08:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:08:56 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "format": "json"}]: dispatch Oct 5 06:08:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:08:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:08:56 localhost podman[335655]: 2025-10-05 10:08:56.102646602 +0000 UTC m=+0.258851139 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible) Oct 5 06:08:56 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:08:56 localhost podman[335654]: 2025-10-05 10:08:56.123698583 +0000 UTC m=+0.283049345 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:08:56 localhost podman[248157]: @ - - [05/Oct/2025:10:08:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:08:56 localhost systemd[1]: 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:08:56 localhost podman[248157]: @ - - [05/Oct/2025:10:08:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19337 "" "Go-http-client/1.1" Oct 5 06:08:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e178 e178: 6 total, 6 up, 6 in Oct 5 06:08:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:57.412 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a3520bd-795e-496b-9bd8-63b98bafb741, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=405e4ec1-95c6-4d96-9868-4d6f8824ae0c) old=Port_Binding(mac=['fa:16:3e:c1:f0:65 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:device_owner': 'network:distributed', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5c636c-bc8a-429a-8f10-8f4508a77c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2943591b4b454696b34524fb1ef8a7d5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:57.414 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 405e4ec1-95c6-4d96-9868-4d6f8824ae0c in datapath 6c5c636c-bc8a-429a-8f10-8f4508a77c3b updated#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:57.416 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5c636c-bc8a-429a-8f10-8f4508a77c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:08:57 localhost ovn_metadata_agent[163196]: 2025-10-05 10:08:57.418 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[55af4b80-6b1a-48c8-bcc7-7674c5da0814]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:08:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 192 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 10 KiB/s wr, 132 op/s Oct 5 06:08:58 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:58.262 2 INFO neutron.agent.securitygroups_rpc [None req-375d4cef-b02d-4920-a219-c43790d6eb04 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['3f24eb1d-3619-4317-aa94-a0a6422fd556', '5494f8cd-e84c-4ce4-b27e-50351805d667', 'e41dff43-d69f-4ffb-9be8-bbcee95191da']#033[00m Oct 5 06:08:58 localhost neutron_sriov_agent[264647]: 2025-10-05 
10:08:58.796 2 INFO neutron.agent.securitygroups_rpc [None req-8bc387b0-c1e0-4b39-8ca0-f92bc5e66ef6 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['3f24eb1d-3619-4317-aa94-a0a6422fd556', '5494f8cd-e84c-4ce4-b27e-50351805d667']#033[00m Oct 5 06:08:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:08:59 localhost nova_compute[297130]: 2025-10-05 10:08:59.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:08:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "snap_name": "e3280645-f6a9-44fb-b48f-b95e956dd00d", "format": "json"}]: dispatch Oct 5 06:08:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e3280645-f6a9-44fb-b48f-b95e956dd00d, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:08:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e3280645-f6a9-44fb-b48f-b95e956dd00d, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:08:59 localhost neutron_sriov_agent[264647]: 2025-10-05 10:08:59.674 2 INFO neutron.agent.securitygroups_rpc [None req-9686299d-cab6-47d3-9abe-3e81fe4f3e3c f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:08:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap 
v388: 177 pgs: 177 active+clean; 192 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 8.0 KiB/s wr, 105 op/s Oct 5 06:09:00 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:00.219 2 INFO neutron.agent.securitygroups_rpc [None req-bc48c6e7-0c03-4ca1-8c93-290080c26aa7 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:00 localhost nova_compute[297130]: 2025-10-05 10:09:00.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:00 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:00.849 2 INFO neutron.agent.securitygroups_rpc [None req-bfed7ab9-dd4e-4a63-bc1b-c6d9d4378152 7b16fbc83efb4f4e9736b90968ace47e 2943591b4b454696b34524fb1ef8a7d5 - - default default] Security group member updated ['403ef325-843a-42e9-9412-a4f8fc546f92']#033[00m Oct 5 06:09:00 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:09:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:00 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:09:00 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config 
b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:09:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:00 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "format": "json"}]: dispatch Oct 5 06:09:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 192 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 6.2 KiB/s wr, 81 op/s Oct 5 06:09:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e179 e179: 6 total, 6 up, 6 in Oct 5 06:09:03 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:03.652 2 INFO neutron.agent.securitygroups_rpc [None req-3960a7db-6f98-4f76-bedd-171182bec9e5 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 238 MiB data, 930 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 160 op/s Oct 5 06:09:03 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:04 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:04.184 2 INFO neutron.agent.securitygroups_rpc [None req-12a6e729-e23a-4a91-b1d1-b640ebdb282f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m Oct 5 06:09:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b21e0e22-084f-4de1-a6b7-857a600c8cfd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:09:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < "" Oct 5 06:09:04 localhost nova_compute[297130]: 2025-10-05 10:09:04.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:04 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b21e0e22-084f-4de1-a6b7-857a600c8cfd/.meta.tmp' Oct 5 06:09:04 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b21e0e22-084f-4de1-a6b7-857a600c8cfd/.meta.tmp' to config b'/volumes/_nogroup/b21e0e22-084f-4de1-a6b7-857a600c8cfd/.meta' Oct 5 06:09:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < "" Oct 5 06:09:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b21e0e22-084f-4de1-a6b7-857a600c8cfd", "format": "json"}]: dispatch Oct 5 06:09:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < "" Oct 5 06:09:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < "" Oct 5 06:09:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:09:05 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1796317667' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:09:05 localhost nova_compute[297130]: 2025-10-05 10:09:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 238 MiB data, 930 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 143 op/s Oct 5 06:09:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:05 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2053914468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:05 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2053914468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e180 e180: 6 total, 6 up, 6 in Oct 5 06:09:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "snap_name": "e3280645-f6a9-44fb-b48f-b95e956dd00d_d7102057-4305-4f02-ba3d-784c6e4fe8fd", "force": true, "format": "json"}]: dispatch Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3280645-f6a9-44fb-b48f-b95e956dd00d_d7102057-4305-4f02-ba3d-784c6e4fe8fd, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta.tmp' Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta.tmp' to config b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta' Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3280645-f6a9-44fb-b48f-b95e956dd00d_d7102057-4305-4f02-ba3d-784c6e4fe8fd, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:09:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "snap_name": 
"e3280645-f6a9-44fb-b48f-b95e956dd00d", "force": true, "format": "json"}]: dispatch Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3280645-f6a9-44fb-b48f-b95e956dd00d, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta.tmp' Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta.tmp' to config b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c/.meta' Oct 5 06:09:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e3280645-f6a9-44fb-b48f-b95e956dd00d, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < "" Oct 5 06:09:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:06.815 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8:0:1:f816:3eff:fe6e:70bd'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a0dd853-b6a5-40b4-b4b0-34529187f2ad, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b2d613f2-c0ef-46b8-96cf-a8caa2176163) old=Port_Binding(mac=['fa:16:3e:6e:70:bd 2001:db8::f816:3eff:fe6e:70bd'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:70bd/64', 'neutron:device_id': 'ovnmeta-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bd6f3dd-fb92-442c-9990-66b374f9f0fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25c75a84dcbe4bb6ba4688edae1e525f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:06.816 163201 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b2d613f2-c0ef-46b8-96cf-a8caa2176163 in datapath 2bd6f3dd-fb92-442c-9990-66b374f9f0fb updated#033[00m Oct 5 06:09:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:06.818 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bd6f3dd-fb92-442c-9990-66b374f9f0fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:09:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:06.820 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[29561285-345b-4604-b352-631467b73ad9]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:09:06 localhost podman[335697]: 2025-10-05 10:09:06.928790919 +0000 UTC m=+0.086587979 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 06:09:06 localhost podman[335697]: 2025-10-05 10:09:06.953347525 +0000 UTC m=+0.111144595 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 5 06:09:06 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 06:09:07 localhost podman[335698]: 2025-10-05 10:09:07.033422536 +0000 UTC m=+0.188603795 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 06:09:07 localhost podman[335698]: 2025-10-05 10:09:07.045095663 +0000 UTC m=+0.200276862 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST':
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 06:09:07 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 06:09:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e181 e181: 6 total, 6 up, 6 in
Oct 5 06:09:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:07.214 2 INFO neutron.agent.securitygroups_rpc [None req-cba438c9-4daa-407e-a6d9-a2fa2c21cb42 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:09:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b21e0e22-084f-4de1-a6b7-857a600c8cfd", "new_size": 2147483648, "format": "json"}]: dispatch
Oct 5 06:09:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < ""
Oct 5 06:09:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < ""
Oct 5 06:09:07 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:07.726 2 INFO neutron.agent.securitygroups_rpc [None req-db818d45-4c35-4d98-b7ca-8b1be2e037ff f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated
['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:09:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 238 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Oct 5 06:09:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e182 e182: 6 total, 6 up, 6 in
Oct 5 06:09:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:09:08 localhost systemd[1]: tmp-crun.TGGm1X.mount: Deactivated successfully.
Oct 5 06:09:08 localhost podman[335739]: 2025-10-05 10:09:08.932510647 +0000 UTC m=+0.098326567 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vcs-type=git, architecture=x86_64, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal)
Oct 5 06:09:08 localhost podman[335739]: 2025-10-05 10:09:08.952091398 +0000 UTC m=+0.117907318 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Oct 5 06:09:08 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:09:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:09:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e183 e183: 6 total, 6 up, 6 in
Oct 5 06:09:09 localhost nova_compute[297130]: 2025-10-05 10:09:09.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 238 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 26 KiB/s wr, 61 op/s
Oct 5 06:09:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "format": "json"}]: dispatch
Oct 5 06:09:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:09 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f27f66c-9967-4c8b-a986-9c8ef1f5846c' of type subvolume
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.739+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f27f66c-9967-4c8b-a986-9c8ef1f5846c' of type subvolume
Oct 5 06:09:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -'
entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f27f66c-9967-4c8b-a986-9c8ef1f5846c", "force": true, "format": "json"}]: dispatch
Oct 5 06:09:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < ""
Oct 5 06:09:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2f27f66c-9967-4c8b-a986-9c8ef1f5846c'' moved to trashcan
Oct 5 06:09:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:09:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f27f66c-9967-4c8b-a986-9c8ef1f5846c, vol_name:cephfs) < ""
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.758+0000 7f4182c96640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.758+0000 7f4182c96640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.758+0000 7f4182c96640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.758+0000 7f4182c96640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.758+0000 7f4182c96640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.786+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.786+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.786+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.786+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:09 localhost
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:09.786+0000 7f4181c94640 -1 client.0 error registering admin socket command: (17) File exists
Oct 5 06:09:10 localhost nova_compute[297130]: 2025-10-05 10:09:10.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:10 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:10.696 2 INFO neutron.agent.securitygroups_rpc [None req-bd6169c8-5f25-4b33-a37a-6328fb4320f6 f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:09:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b21e0e22-084f-4de1-a6b7-857a600c8cfd", "format": "json"}]: dispatch
Oct 5 06:09:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:10 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b21e0e22-084f-4de1-a6b7-857a600c8cfd' of type subvolume
Oct 5 06:09:10 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:10.833+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b21e0e22-084f-4de1-a6b7-857a600c8cfd' of type subvolume
Oct 5 06:09:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] :
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b21e0e22-084f-4de1-a6b7-857a600c8cfd", "force": true, "format": "json"}]: dispatch
Oct 5 06:09:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < ""
Oct 5 06:09:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b21e0e22-084f-4de1-a6b7-857a600c8cfd'' moved to trashcan
Oct 5 06:09:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:09:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b21e0e22-084f-4de1-a6b7-857a600c8cfd, vol_name:cephfs) < ""
Oct 5 06:09:11 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:11.110 2 INFO neutron.agent.securitygroups_rpc [None req-19f2ba91-a8f2-4e9b-bcd7-482a8c4e1e9f f14d23bc33c149adbfd2bfec2aa44b4b 25c75a84dcbe4bb6ba4688edae1e525f - - default default] Security group member updated ['549c7104-f83b-4b0c-9962-0a1889fe4d9d']#033[00m
Oct 5 06:09:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 5 06:09:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3337915524' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 5 06:09:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 5 06:09:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.?
172.18.0.32:0/3337915524' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 5 06:09:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:09:11
Oct 5 06:09:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 5 06:09:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap
Oct 5 06:09:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_metadata', 'vms', 'manila_data', '.mgr', 'backups', 'images', 'volumes']
Oct 5 06:09:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes
Oct 5 06:09:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:09:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:09:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:09:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:09:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 238 MiB data, 946 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 43 op/s
Oct 5 06:09:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:09:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:09:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e184 e184: 6 total, 6 up, 6 in
Oct 5 06:09:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 5 06:09:11 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.?
172.18.0.32:0/710317095' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00296840103294249 of space, bias 1.0, pg target 0.5926907395775172 quantized to 32 (current 32)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06
of space, bias 1.0, pg target 0.00021665038153731623 quantized to 32 (current 32)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:09:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 1.3086264656616416e-05 of space, bias 4.0, pg target 0.010399218313791179 quantized to 16 (current 16)
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:09:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:09:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c7529e7-e2a4-4bb9-b51d-a992a432a8d2", "format": "json"}]: dispatch
Oct 5 06:09:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:13 localhost ceph-mgr[301363]: [volumes INFO
volumes.module] Finishing _cmd_fs_clone_status(clone_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:13 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c7529e7-e2a4-4bb9-b51d-a992a432a8d2' of type subvolume
Oct 5 06:09:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:13.149+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c7529e7-e2a4-4bb9-b51d-a992a432a8d2' of type subvolume
Oct 5 06:09:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c7529e7-e2a4-4bb9-b51d-a992a432a8d2", "force": true, "format": "json"}]: dispatch
Oct 5 06:09:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, vol_name:cephfs) < ""
Oct 5 06:09:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6c7529e7-e2a4-4bb9-b51d-a992a432a8d2'' moved to trashcan
Oct 5 06:09:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:09:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c7529e7-e2a4-4bb9-b51d-a992a432a8d2, vol_name:cephfs) < ""
Oct 5 06:09:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e185 e185: 6 total, 6 up, 6 in
Oct 5 06:09:13 localhost ceph-mgr[301363]: [devicehealth INFO root] Check health
Oct 5 06:09:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 302 MiB
data, 1014 MiB used, 41 GiB / 42 GiB avail; 7.6 MiB/s rd, 4.4 MiB/s wr, 145 op/s
Oct 5 06:09:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:09:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fa63abd-9d90-46b9-a84e-463b6f76b152", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:09:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < ""
Oct 5 06:09:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fa63abd-9d90-46b9-a84e-463b6f76b152/.meta.tmp'
Oct 5 06:09:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fa63abd-9d90-46b9-a84e-463b6f76b152/.meta.tmp' to config b'/volumes/_nogroup/6fa63abd-9d90-46b9-a84e-463b6f76b152/.meta'
Oct 5 06:09:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < ""
Oct 5 06:09:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fa63abd-9d90-46b9-a84e-463b6f76b152", "format": "json"}]: dispatch
Oct 5 06:09:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume
getpath, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < ""
Oct 5 06:09:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < ""
Oct 5 06:09:14 localhost nova_compute[297130]: 2025-10-05 10:09:14.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 06:09:14 localhost podman[335783]: 2025-10-05 10:09:14.90215032 +0000 UTC m=+0.072152357 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared',
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 06:09:14 localhost podman[335783]: 2025-10-05 10:09:14.937577951 +0000 UTC m=+0.107580038 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared',
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS) Oct 5 06:09:14 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:09:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e186 e186: 6 total, 6 up, 6 in Oct 5 06:09:15 localhost nova_compute[297130]: 2025-10-05 10:09:15.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 302 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.0 MiB/s wr, 133 op/s Oct 5 06:09:16 localhost nova_compute[297130]: 2025-10-05 10:09:16.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:16 localhost nova_compute[297130]: 2025-10-05 10:09:16.275 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:16 localhost openstack_network_exporter[250246]: ERROR 10:09:16 appctl.go:144: Failed to get PID for 
ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:16 localhost openstack_network_exporter[250246]: ERROR 10:09:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:16 localhost openstack_network_exporter[250246]: ERROR 10:09:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:09:16 localhost openstack_network_exporter[250246]: ERROR 10:09:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:09:16 localhost openstack_network_exporter[250246]: Oct 5 06:09:16 localhost openstack_network_exporter[250246]: ERROR 10:09:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:09:16 localhost openstack_network_exporter[250246]: Oct 5 06:09:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e187 e187: 6 total, 6 up, 6 in Oct 5 06:09:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "6fa63abd-9d90-46b9-a84e-463b6f76b152", "new_size": 2147483648, "format": "json"}]: dispatch Oct 5 06:09:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < "" Oct 5 06:09:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < "" Oct 5 06:09:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 238 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 335 op/s Oct 5 06:09:18 localhost nova_compute[297130]: 2025-10-05 10:09:18.273 2 
DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:18 localhost nova_compute[297130]: 2025-10-05 10:09:18.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:09:18 localhost nova_compute[297130]: 2025-10-05 10:09:18.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:09:18 localhost nova_compute[297130]: 2025-10-05 10:09:18.297 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:09:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e188 e188: 6 total, 6 up, 6 in Oct 5 06:09:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:19 localhost nova_compute[297130]: 2025-10-05 10:09:19.291 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:19 localhost nova_compute[297130]: 2025-10-05 10:09:19.291 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:19 localhost nova_compute[297130]: 2025-10-05 10:09:19.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e189 e189: 6 total, 6 up, 6 in Oct 5 06:09:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 238 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 4.2 MiB/s wr, 273 op/s Oct 5 06:09:20 localhost nova_compute[297130]: 2025-10-05 10:09:20.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:20.407 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:09:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:09:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:09:20 localhost nova_compute[297130]: 2025-10-05 10:09:20.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fa63abd-9d90-46b9-a84e-463b6f76b152", "format": "json"}]: dispatch Oct 5 06:09:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:09:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:09:21 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fa63abd-9d90-46b9-a84e-463b6f76b152' of type subvolume Oct 5 06:09:21 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:21.088+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fa63abd-9d90-46b9-a84e-463b6f76b152' of type subvolume Oct 5 06:09:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fa63abd-9d90-46b9-a84e-463b6f76b152", "force": true, "format": "json"}]: dispatch Oct 5 06:09:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < "" Oct 5 06:09:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fa63abd-9d90-46b9-a84e-463b6f76b152'' moved to trashcan Oct 5 06:09:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:09:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fa63abd-9d90-46b9-a84e-463b6f76b152, vol_name:cephfs) < "" Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:21 localhost 
nova_compute[297130]: 2025-10-05 10:09:21.299 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.300 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.300 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.301 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.301 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:09:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:21.703 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), 
table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:09:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:21.705 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:09:21 localhost ovn_metadata_agent[163196]: 2025-10-05 10:09:21.706 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:09:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3920797673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:09:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 238 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 311 KiB/s rd, 3.1 MiB/s wr, 199 op/s Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.753 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:09:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e190 e190: 6 total, 6 up, 6 in Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.951 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.953 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11547MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.953 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:09:21 localhost nova_compute[297130]: 2025-10-05 10:09:21.954 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:09:22 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:22.016 2 INFO neutron.agent.securitygroups_rpc [None req-fad1303f-10e1-4f92-96c1-525ebd79c22d c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.035 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.036 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.057 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:09:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:09:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3891463745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.483 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.489 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.507 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.510 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:09:22 localhost nova_compute[297130]: 2025-10-05 10:09:22.511 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:09:22 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:22.927 2 INFO neutron.agent.securitygroups_rpc [None req-f5cc318a-721d-403f-b0ec-8fc7507ec8fd c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:23 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:23.078 2 INFO neutron.agent.securitygroups_rpc [None req-c278cc7e-1b90-41ba-a679-990bec890d12 f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group rule updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m Oct 5 06:09:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 192 MiB data, 907 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 14 KiB/s wr, 85 op/s Oct 5 06:09:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:09:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:09:23 localhost podman[335846]: 2025-10-05 10:09:23.918682863 +0000 UTC m=+0.080953806 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:09:23 localhost podman[335846]: 2025-10-05 10:09:23.925538188 +0000 UTC m=+0.087809131 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:09:23 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:09:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:24 localhost systemd[1]: tmp-crun.VBW8rc.mount: Deactivated successfully. 
Oct 5 06:09:24 localhost podman[335845]: 2025-10-05 10:09:24.027663388 +0000 UTC m=+0.192526682 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd) Oct 5 06:09:24 localhost podman[335845]: 2025-10-05 10:09:24.043794854 +0000 UTC m=+0.208658108 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Oct 5 06:09:24 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:09:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:24.184 2 INFO neutron.agent.securitygroups_rpc [None req-ed7161ea-821e-4844-93bf-e3373bfef5f6 f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group rule updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m Oct 5 06:09:24 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:24.326 2 INFO neutron.agent.securitygroups_rpc [None req-0d172463-a426-431e-b15d-cf3f700edad7 c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated ['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5ddebd16-0815-4674-8596-6df089a264e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:09:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5ddebd16-0815-4674-8596-6df089a264e5, vol_name:cephfs) < "" Oct 5 06:09:24 localhost nova_compute[297130]: 2025-10-05 10:09:24.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5ddebd16-0815-4674-8596-6df089a264e5/.meta.tmp' Oct 5 06:09:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5ddebd16-0815-4674-8596-6df089a264e5/.meta.tmp' to config b'/volumes/_nogroup/5ddebd16-0815-4674-8596-6df089a264e5/.meta' Oct 5 06:09:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5ddebd16-0815-4674-8596-6df089a264e5, vol_name:cephfs) < "" Oct 5 06:09:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5ddebd16-0815-4674-8596-6df089a264e5", "format": "json"}]: dispatch Oct 5 06:09:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5ddebd16-0815-4674-8596-6df089a264e5, vol_name:cephfs) < "" Oct 5 06:09:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5ddebd16-0815-4674-8596-6df089a264e5, vol_name:cephfs) < "" Oct 5 06:09:24 localhost nova_compute[297130]: 2025-10-05 10:09:24.510 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:24 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:24.971 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:09:24Z, description=, device_id=52499232-822f-4eab-b447-7060aada031a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=06f39db9-e5df-4c05-8e42-c0ebc8e17ee5, ip_allocation=immediate, mac_address=fa:16:3e:c2:36:e0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, 
id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2736, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:09:24Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:09:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3041055549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3041055549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:25 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:09:25 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:09:25 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:09:25 localhost podman[335903]: 2025-10-05 10:09:25.207097696 +0000 UTC m=+0.060483172 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:09:25 localhost nova_compute[297130]: 2025-10-05 10:09:25.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. 
Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.637196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965637230, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2726, "num_deletes": 262, "total_data_size": 4634718, "memory_usage": 4695408, "flush_reason": "Manual Compaction"} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965659292, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3027410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22683, "largest_seqno": 25404, "table_properties": {"data_size": 3016907, "index_size": 6624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24304, "raw_average_key_size": 21, "raw_value_size": 2995051, "raw_average_value_size": 2710, "num_data_blocks": 284, "num_entries": 1105, "num_filter_entries": 1105, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658819, "oldest_key_time": 1759658819, "file_creation_time": 1759658965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 22159 microseconds, and 7557 cpu microseconds. Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.659350) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3027410 bytes OK Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.659377) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.661057) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.661079) EVENT_LOG_v1 {"time_micros": 1759658965661073, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.661100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4622274, prev total WAL file size 
4622274, number of live WAL files 2. Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.662366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2956KB)], [36(13MB)] Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965662449, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 17528545, "oldest_snapshot_seqno": -1} Oct 5 06:09:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 192 MiB data, 907 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 70 op/s Oct 5 06:09:25 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:25.764 271653 INFO neutron.agent.dhcp.agent [None req-175d54c3-0658-48d9-bbfb-c9d14926267a - - - - - -] DHCP configuration for ports {'06f39db9-e5df-4c05-8e42-c0ebc8e17ee5'} is completed#033[00m Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 13102 keys, 16448887 bytes, temperature: kUnknown Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965767737, "cf_name": "default", "job": 
20, "event": "table_file_creation", "file_number": 39, "file_size": 16448887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16374595, "index_size": 40463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32773, "raw_key_size": 352554, "raw_average_key_size": 26, "raw_value_size": 16151951, "raw_average_value_size": 1232, "num_data_blocks": 1513, "num_entries": 13102, "num_filter_entries": 13102, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759658965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.768094) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 16448887 bytes Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.770107) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.3 rd, 156.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 13.8 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(11.2) write-amplify(5.4) OK, records in: 13640, records dropped: 538 output_compression: NoCompression Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.770136) EVENT_LOG_v1 {"time_micros": 1759658965770122, "job": 20, "event": "compaction_finished", "compaction_time_micros": 105414, "compaction_time_cpu_micros": 48656, "output_level": 6, "num_output_files": 1, "total_output_size": 16448887, "num_input_records": 13640, "num_output_records": 13102, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965770750, "job": 20, "event": "table_file_deletion", "file_number": 38} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759658965773213, "job": 
20, "event": "table_file_deletion", "file_number": 36} Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.662290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.773302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.773311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.773314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.773317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:09:25.773320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:09:26 localhost podman[248157]: time="2025-10-05T10:09:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:09:26 localhost podman[248157]: @ - - [05/Oct/2025:10:09:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:09:26 localhost podman[248157]: @ - - [05/Oct/2025:10:09:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19340 "" "Go-http-client/1.1" Oct 5 06:09:26 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:26.225 2 INFO neutron.agent.securitygroups_rpc [None req-6e3cb276-bf4f-4dda-9d60-a803c8ae9afd c9709adfed054f448254a4bcf5f9f2b1 b103796d13b94d8190276faed33a3c03 - - default default] Security group member updated 
['f4b0fb50-401c-4073-88d7-f445d90ddf1f']#033[00m Oct 5 06:09:26 localhost nova_compute[297130]: 2025-10-05 10:09:26.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:09:26 localhost nova_compute[297130]: 2025-10-05 10:09:26.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:09:26 localhost nova_compute[297130]: 2025-10-05 10:09:26.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e191 e191: 6 total, 6 up, 6 in Oct 5 06:09:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:09:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:09:26 localhost podman[335924]: 2025-10-05 10:09:26.928821465 +0000 UTC m=+0.093209298 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible) Oct 5 06:09:26 localhost systemd[1]: tmp-crun.jhdo1B.mount: Deactivated successfully. 
Oct 5 06:09:26 localhost podman[335925]: 2025-10-05 10:09:26.975201023 +0000 UTC m=+0.135569527 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:09:26 localhost podman[335924]: 2025-10-05 10:09:26.98911526 +0000 UTC m=+0.153503043 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid) Oct 5 06:09:27 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:09:27 localhost podman[335925]: 2025-10-05 10:09:27.037240395 +0000 UTC m=+0.197608829 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller) Oct 5 06:09:27 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:09:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5ddebd16-0815-4674-8596-6df089a264e5", "format": "json"}]: dispatch Oct 5 06:09:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5ddebd16-0815-4674-8596-6df089a264e5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:09:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5ddebd16-0815-4674-8596-6df089a264e5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:09:27 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:27.698+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5ddebd16-0815-4674-8596-6df089a264e5' of type subvolume Oct 5 06:09:27 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5ddebd16-0815-4674-8596-6df089a264e5' of type subvolume Oct 5 06:09:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5ddebd16-0815-4674-8596-6df089a264e5", "force": true, "format": "json"}]: dispatch Oct 5 06:09:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5ddebd16-0815-4674-8596-6df089a264e5, vol_name:cephfs) < "" Oct 5 06:09:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5ddebd16-0815-4674-8596-6df089a264e5'' moved to trashcan Oct 5 06:09:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 
'cephfs' Oct 5 06:09:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5ddebd16-0815-4674-8596-6df089a264e5, vol_name:cephfs) < "" Oct 5 06:09:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 192 MiB data, 907 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 15 KiB/s wr, 121 op/s Oct 5 06:09:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:29 localhost nova_compute[297130]: 2025-10-05 10:09:29.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 192 MiB data, 907 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 15 KiB/s wr, 121 op/s Oct 5 06:09:30 localhost nova_compute[297130]: 2025-10-05 10:09:30.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:30 localhost nova_compute[297130]: 2025-10-05 10:09:30.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "017ec640-5314-4bb2-845f-218f4d3d87fa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:09:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:017ec640-5314-4bb2-845f-218f4d3d87fa, vol_name:cephfs) < ""
Oct 5 06:09:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/017ec640-5314-4bb2-845f-218f4d3d87fa/.meta.tmp'
Oct 5 06:09:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/017ec640-5314-4bb2-845f-218f4d3d87fa/.meta.tmp' to config b'/volumes/_nogroup/017ec640-5314-4bb2-845f-218f4d3d87fa/.meta'
Oct 5 06:09:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:017ec640-5314-4bb2-845f-218f4d3d87fa, vol_name:cephfs) < ""
Oct 5 06:09:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "017ec640-5314-4bb2-845f-218f4d3d87fa", "format": "json"}]: dispatch
Oct 5 06:09:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:017ec640-5314-4bb2-845f-218f4d3d87fa, vol_name:cephfs) < ""
Oct 5 06:09:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:017ec640-5314-4bb2-845f-218f4d3d87fa, vol_name:cephfs) < ""
Oct 5 06:09:31 localhost podman[336074]: 2025-10-05 10:09:31.261497686 +0000 UTC m=+0.095268294 container exec 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.buildah.version=1.33.12)
Oct 5 06:09:31 localhost podman[336074]: 2025-10-05 10:09:31.401326927 +0000 UTC m=+0.235097525 container exec_died 89e4770b0c4f4582cc6bf46306697c1eb1800fa959640273452bdea4a088315b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-659062ac-50b4-5607-b699-3105da7f55ee-crash-np0005471152, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, RELEASE=main, io.openshift.expose-services=, vcs-type=git)
Oct 5 06:09:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 192 MiB data, 907 MiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 12 KiB/s wr, 98 op/s
Oct 5 06:09:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 06:09:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 06:09:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 06:09:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 06:09:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 06:09:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 06:09:32 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:32.441 2 INFO neutron.agent.securitygroups_rpc [req-6353351a-5432-4eff-bd51-af198ef2c8ab req-2f538ea6-c0e6-44bc-8666-e3e31a7af142 f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group member updated ['d9126934-1777-40de-b348-3975c8158884']#033[00m
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 06:09:33 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 06:09:33 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:33 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:33 localhost ceph-mgr[301363]: [cephadm INFO root] Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 5 06:09:33 localhost ceph-mgr[301363]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:09:33 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 234c472d-8aa9-453a-9e15-cad04e7afc80 (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:09:33 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 234c472d-8aa9-453a-9e15-cad04e7afc80 (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:09:33 localhost ceph-mgr[301363]: [progress INFO root] Completed event 234c472d-8aa9-453a-9e15-cad04e7afc80 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 06:09:33 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 06:09:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 238 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 5 06:09:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:09:34 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471152.localdomain to 836.6M
Oct 5 06:09:34 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471151.localdomain to 836.6M
Oct 5 06:09:34 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471152.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:34 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471151.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Oct 5 06:09:34 localhost ceph-mon[316511]: Adjusting osd_memory_target on np0005471150.localdomain to 836.6M
Oct 5 06:09:34 localhost ceph-mon[316511]: Unable to set osd_memory_target on np0005471150.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Oct 5 06:09:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:09:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "017ec640-5314-4bb2-845f-218f4d3d87fa", "format": "json"}]: dispatch
Oct 5 06:09:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:017ec640-5314-4bb2-845f-218f4d3d87fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:017ec640-5314-4bb2-845f-218f4d3d87fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:09:34 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:34.171+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '017ec640-5314-4bb2-845f-218f4d3d87fa' of type subvolume
Oct 5 06:09:34 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '017ec640-5314-4bb2-845f-218f4d3d87fa' of type subvolume
Oct 5 06:09:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "017ec640-5314-4bb2-845f-218f4d3d87fa", "force": true, "format": "json"}]: dispatch
Oct 5 06:09:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:017ec640-5314-4bb2-845f-218f4d3d87fa, vol_name:cephfs) < ""
Oct 5 06:09:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/017ec640-5314-4bb2-845f-218f4d3d87fa'' moved to trashcan
Oct 5 06:09:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:09:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:017ec640-5314-4bb2-845f-218f4d3d87fa, vol_name:cephfs) < ""
Oct 5 06:09:34 localhost nova_compute[297130]: 2025-10-05 10:09:34.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:34 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:34.906 2 INFO neutron.agent.securitygroups_rpc [None req-8e52407e-22e3-4460-b563-f3ed98e26817 c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m
Oct 5 06:09:35 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:35.343 2 INFO neutron.agent.securitygroups_rpc [None req-5a937b19-1c21-40ca-9dd1-c32713d70666 c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m
Oct 5 06:09:35 localhost nova_compute[297130]: 2025-10-05 10:09:35.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 238 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 5 06:09:35 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:35.767 2 INFO neutron.agent.securitygroups_rpc [None req-220e88cd-40cc-4db3-90a0-0428b7fdb7ba c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m
Oct 5 06:09:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e192 e192: 6 total, 6 up, 6 in
Oct 5 06:09:36 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:36.740 2 INFO neutron.agent.securitygroups_rpc [None req-0591fb42-029f-4e86-b8af-823ac9318d65 c55b4469474b45aa8c7e62a1c67e220f 093f417e0eff4abba8c994a4ac741c61 - - default default] Security group member updated ['d6e72ece-f511-4085-9183-e8d7395d0930']#033[00m
Oct 5 06:09:36 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events
Oct 5 06:09:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 5 06:09:37 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:09:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e193 e193: 6 total, 6 up, 6 in
Oct 5 06:09:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "0e16f139-6110-4863-905f-8a0b59bd3ce2", "format": "json"}]: dispatch
Oct 5 06:09:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0e16f139-6110-4863-905f-8a0b59bd3ce2, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0e16f139-6110-4863-905f-8a0b59bd3ce2, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 357 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 14 MiB/s wr, 214 op/s
Oct 5 06:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 06:09:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 06:09:37 localhost systemd[1]: tmp-crun.h1C1yb.mount: Deactivated successfully.
Oct 5 06:09:37 localhost podman[336277]: 2025-10-05 10:09:37.904882997 +0000 UTC m=+0.076472174 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 5 06:09:37 localhost podman[336277]: 2025-10-05 10:09:37.91418777 +0000 UTC m=+0.085777017 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 06:09:37 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 06:09:37 localhost podman[336278]: 2025-10-05 10:09:37.970187878 +0000 UTC m=+0.137210961 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 06:09:38 localhost podman[336278]: 2025-10-05 10:09:38.00603176 +0000 UTC m=+0.173054803 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 06:09:38 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 06:09:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:09:39 localhost nova_compute[297130]: 2025-10-05 10:09:39.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 357 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 14 MiB/s wr, 214 op/s
Oct 5 06:09:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:09:39 localhost podman[336319]: 2025-10-05 10:09:39.914146495 +0000 UTC m=+0.081921223 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Oct 5 06:09:39 localhost podman[336319]: 2025-10-05 10:09:39.924874446 +0000 UTC m=+0.092649154 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Oct 5 06:09:39 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:09:40 localhost nova_compute[297130]: 2025-10-05 10:09:40.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "0e16f139-6110-4863-905f-8a0b59bd3ce2_0e449d7a-4c09-4b29-aa73-0644b69a4c3e", "force": true, "format": "json"}]: dispatch
Oct 5 06:09:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e16f139-6110-4863-905f-8a0b59bd3ce2_0e449d7a-4c09-4b29-aa73-0644b69a4c3e, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp'
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta'
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e16f139-6110-4863-905f-8a0b59bd3ce2_0e449d7a-4c09-4b29-aa73-0644b69a4c3e, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "0e16f139-6110-4863-905f-8a0b59bd3ce2", "force": true, "format": "json"}]: dispatch
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e16f139-6110-4863-905f-8a0b59bd3ce2, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp'
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta'
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0e16f139-6110-4863-905f-8a0b59bd3ce2, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:09:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:09:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 357 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 218 KiB/s rd, 12 MiB/s wr, 131 op/s Oct 5 06:09:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "26258426-dfec-4b11-a47c-bf1ce0f1b857", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:09:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, vol_name:cephfs) < "" Oct 5 06:09:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/26258426-dfec-4b11-a47c-bf1ce0f1b857/.meta.tmp' Oct 5 06:09:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/26258426-dfec-4b11-a47c-bf1ce0f1b857/.meta.tmp' to config b'/volumes/_nogroup/26258426-dfec-4b11-a47c-bf1ce0f1b857/.meta' Oct 5 06:09:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, vol_name:cephfs) < "" Oct 5 06:09:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "26258426-dfec-4b11-a47c-bf1ce0f1b857", "format": "json"}]: dispatch Oct 5 06:09:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, vol_name:cephfs) < "" Oct 5 06:09:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, vol_name:cephfs) < "" Oct 5 06:09:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e194 e194: 6 total, 6 up, 6 in Oct 5 06:09:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 455 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 3.2 MiB/s rd, 32 MiB/s wr, 302 op/s Oct 5 06:09:43 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:43.834 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e195 e195: 6 total, 6 up, 6 in Oct 5 06:09:44 localhost nova_compute[297130]: 2025-10-05 10:09:44.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e196 e196: 6 total, 6 up, 6 in Oct 5 06:09:45 localhost nova_compute[297130]: 2025-10-05 10:09:45.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "26258426-dfec-4b11-a47c-bf1ce0f1b857", "format": "json"}]: dispatch Oct 5 06:09:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:09:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:09:45 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '26258426-dfec-4b11-a47c-bf1ce0f1b857' of type subvolume Oct 5 06:09:45 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:09:45.668+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '26258426-dfec-4b11-a47c-bf1ce0f1b857' of type subvolume Oct 5 06:09:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "26258426-dfec-4b11-a47c-bf1ce0f1b857", "force": true, "format": "json"}]: dispatch Oct 5 06:09:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, vol_name:cephfs) < "" Oct 5 06:09:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/26258426-dfec-4b11-a47c-bf1ce0f1b857'' moved to trashcan Oct 5 06:09:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:09:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:26258426-dfec-4b11-a47c-bf1ce0f1b857, vol_name:cephfs) < "" Oct 5 06:09:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:45.726 2 INFO neutron.agent.securitygroups_rpc 
[None req-e82370c2-2ab3-4aa4-b2e0-b7050fb43aea 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 455 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 3.7 MiB/s rd, 24 MiB/s wr, 201 op/s Oct 5 06:09:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:09:45 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:45.823 2 INFO neutron.agent.securitygroups_rpc [None req-e82370c2-2ab3-4aa4-b2e0-b7050fb43aea 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:45 localhost podman[336340]: 2025-10-05 10:09:45.91937021 +0000 UTC m=+0.084436880 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible) Oct 5 06:09:45 localhost podman[336340]: 2025-10-05 10:09:45.924630143 +0000 UTC m=+0.089696803 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Oct 5 06:09:45 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:09:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:46.237 2 INFO neutron.agent.securitygroups_rpc [None req-a2bafd38-32f2-4fe5-bbc1-171c22c32945 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:46 localhost neutron_sriov_agent[264647]: 2025-10-05 10:09:46.524 2 INFO neutron.agent.securitygroups_rpc [None req-171beb7e-4bcd-41f6-af20-b5b34448b785 978c796c7f894ed592893244265edb3c ec53f18d13214ccf80ee92ca6e4213ee - - default default] Security group member updated ['d315d073-b14e-490f-90b1-1d6b5febcd73']#033[00m Oct 5 06:09:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:09:46 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/467818136' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:09:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:09:46 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/467818136' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:09:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e197 e197: 6 total, 6 up, 6 in Oct 5 06:09:46 localhost openstack_network_exporter[250246]: ERROR 10:09:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:46 localhost openstack_network_exporter[250246]: ERROR 10:09:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:09:46 localhost openstack_network_exporter[250246]: ERROR 10:09:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:09:46 localhost openstack_network_exporter[250246]: ERROR 10:09:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:09:46 localhost openstack_network_exporter[250246]: Oct 5 06:09:46 localhost openstack_network_exporter[250246]: ERROR 10:09:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:09:46 localhost openstack_network_exporter[250246]: Oct 5 06:09:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "7e56c91d-eb22-4fac-8980-7cbae18ed1d3", "format": "json"}]: dispatch Oct 5 06:09:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e56c91d-eb22-4fac-8980-7cbae18ed1d3, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e56c91d-eb22-4fac-8980-7cbae18ed1d3, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 591 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 152 KiB/s rd, 32 MiB/s wr, 215 op/s Oct 5 06:09:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e198 e198: 6 total, 6 up, 6 in Oct 5 06:09:48 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Oct 5 06:09:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e199 e199: 6 total, 6 up, 6 in Oct 5 06:09:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:49 localhost nova_compute[297130]: 2025-10-05 10:09:49.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 591 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 155 KiB/s rd, 32 MiB/s wr, 219 op/s Oct 5 06:09:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e200 e200: 6 total, 6 up, 6 in Oct 5 06:09:50 localhost nova_compute[297130]: 2025-10-05 10:09:50.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:50 localhost ceph-mgr[301363]: 
log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "73b2b243-157e-4e4f-aa23-56618d37561d", "format": "json"}]: dispatch Oct 5 06:09:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:73b2b243-157e-4e4f-aa23-56618d37561d, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:73b2b243-157e-4e4f-aa23-56618d37561d, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 591 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 126 KiB/s rd, 26 MiB/s wr, 178 op/s Oct 5 06:09:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e201 e201: 6 total, 6 up, 6 in Oct 5 06:09:52 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:09:52 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2063309996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.174 271653 INFO neutron.agent.dhcp.agent [None req-a939a953-f502-4899-aa58-421095146a67 - - - - - -] Synchronizing state#033[00m Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.336 271653 INFO neutron.agent.dhcp.agent [None req-626a54dc-2fa1-4f97-848e-c1d88357eade - - - - - -] All active networks have been fetched through RPC.#033[00m Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.337 271653 INFO neutron.agent.dhcp.agent [-] Starting network 06e19bd7-eb0a-4bf7-89ef-f8982eff360c dhcp configuration#033[00m Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.338 271653 INFO neutron.agent.dhcp.agent [-] Finished network 06e19bd7-eb0a-4bf7-89ef-f8982eff360c dhcp configuration#033[00m Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.338 271653 INFO neutron.agent.dhcp.agent [None req-626a54dc-2fa1-4f97-848e-c1d88357eade - - - - - -] Synchronizing state complete#033[00m Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.339 271653 INFO neutron.agent.dhcp.agent [None req-e1da064c-81e2-488e-910c-a0d96ca6c94e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:09:52.593 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Oct 5 06:09:52 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e202 e202: 6 total, 6 up, 6 in Oct 5 06:09:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 760 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 971 KiB/s rd, 33 MiB/s wr, 306 op/s Oct 5 06:09:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e202 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:09:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e203 e203: 6 total, 6 up, 6 in Oct 5 06:09:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "40337151-dfe3-4441-aab5-125f008b3b65", "format": "json"}]: dispatch Oct 5 06:09:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:40337151-dfe3-4441-aab5-125f008b3b65, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:40337151-dfe3-4441-aab5-125f008b3b65, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:09:54 localhost nova_compute[297130]: 2025-10-05 10:09:54.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:09:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:09:54 localhost podman[336358]: 2025-10-05 10:09:54.915324305 +0000 UTC m=+0.080819782 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:09:54 localhost podman[336359]: 2025-10-05 10:09:54.97158565 +0000 UTC m=+0.134066066 container health_status 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:09:54 localhost podman[336358]: 2025-10-05 10:09:54.982534267 +0000 UTC m=+0.148029744 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Oct 5 06:09:54 localhost podman[336359]: 2025-10-05 10:09:54.995342874 +0000 UTC m=+0.157823340 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:09:54 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:09:55 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:09:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e204 e204: 6 total, 6 up, 6 in Oct 5 06:09:55 localhost nova_compute[297130]: 2025-10-05 10:09:55.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:09:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 760 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 1.1 MiB/s rd, 40 MiB/s wr, 371 op/s Oct 5 06:09:56 localhost podman[248157]: time="2025-10-05T10:09:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:09:56 localhost podman[248157]: @ - - [05/Oct/2025:10:09:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:09:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e205 e205: 6 total, 6 up, 6 in Oct 5 06:09:56 localhost podman[248157]: @ - - [05/Oct/2025:10:09:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19319 "" "Go-http-client/1.1" Oct 5 06:09:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e206 e206: 6 total, 6 up, 6 in Oct 5 06:09:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 896 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 137 KiB/s rd, 34 MiB/s wr, 200 op/s Oct 5 06:09:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:09:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:09:57 localhost systemd[1]: tmp-crun.owNdg2.mount: Deactivated successfully. 
Oct 5 06:09:57 localhost podman[336400]: 2025-10-05 10:09:57.94655926 +0000 UTC m=+0.107339822 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 5 06:09:57 localhost podman[336400]: 2025-10-05 10:09:57.990319106 +0000 UTC m=+0.151099678 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid)
Oct 5 06:09:58 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:09:58 localhost podman[336401]: 2025-10-05 10:09:58.03139339 +0000 UTC m=+0.188072110 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 5 06:09:58 localhost podman[336401]: 2025-10-05 10:09:58.069576735 +0000 UTC m=+0.226255455 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:09:58 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:09:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:09:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82", "format": "json"}]: dispatch
Oct 5 06:09:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:09:59 localhost nova_compute[297130]: 2025-10-05 10:09:59.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:09:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 896 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 96 KiB/s rd, 24 MiB/s wr, 139 op/s
Oct 5 06:10:00 localhost ceph-mon[316511]: overall HEALTH_OK
Oct 5 06:10:00 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e207 e207: 6 total, 6 up, 6 in
Oct 5 06:10:00 localhost nova_compute[297130]: 2025-10-05 10:10:00.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:10:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 896 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 91 KiB/s rd, 23 MiB/s wr, 133 op/s
Oct 5 06:10:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e208 e208: 6 total, 6 up, 6 in
Oct 5 06:10:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 1.0 GiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 132 KiB/s rd, 39 MiB/s wr, 192 op/s
Oct 5 06:10:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:10:04 localhost nova_compute[297130]: 2025-10-05 10:10:04.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:10:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 1.0 GiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 46 KiB/s rd, 17 MiB/s wr, 65 op/s
Oct 5 06:10:05 localhost nova_compute[297130]: 2025-10-05 10:10:05.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:10:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e209 e209: 6 total, 6 up, 6 in
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.857132) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006857224, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 266, "total_data_size": 1541988, "memory_usage": 1565744, "flush_reason": "Manual Compaction"}
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006865706, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1010148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25409, "largest_seqno": 26545, "table_properties": {"data_size": 1004870, "index_size": 2685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12715, "raw_average_key_size": 21, "raw_value_size": 993825, "raw_average_value_size": 1670, "num_data_blocks": 111, "num_entries": 595, "num_filter_entries": 595, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658966, "oldest_key_time": 1759658966, "file_creation_time": 1759659006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8684 microseconds, and 4964 cpu microseconds.
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.865756) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1010148 bytes OK
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.865852) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.868150) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.868182) EVENT_LOG_v1 {"time_micros": 1759659006868173, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.868209) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1536075, prev total WAL file size 1536075, number of live WAL files 2.
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.869313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323635' seq:72057594037927935, type:22 .. '6C6F676D0034353137' seq:0, type:0; will stop at (end)
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(986KB)], [39(15MB)]
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006869391, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 17459035, "oldest_snapshot_seqno": -1}
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 13144 keys, 16756788 bytes, temperature: kUnknown
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006991186, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 16756788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16682222, "index_size": 40609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 354861, "raw_average_key_size": 26, "raw_value_size": 16458783, "raw_average_value_size": 1252, "num_data_blocks": 1505, "num_entries": 13144, "num_filter_entries": 13144, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759659006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.991578) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 16756788 bytes
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.997000) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 137.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 15.7 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(33.9) write-amplify(16.6) OK, records in: 13697, records dropped: 553 output_compression: NoCompression
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.997041) EVENT_LOG_v1 {"time_micros": 1759659006997023, "job": 22, "event": "compaction_finished", "compaction_time_micros": 121884, "compaction_time_cpu_micros": 48265, "output_level": 6, "num_output_files": 1, "total_output_size": 16756788, "num_input_records": 13697, "num_output_records": 13144, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:10:06 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659006997432, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659007000429, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:06.869198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:07.000547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:07.000555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:07.000559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:07.000562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:10:07 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:10:07.000566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:10:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 59 KiB/s rd, 34 MiB/s wr, 88 op/s
Oct 5 06:10:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "0a3e5f6f-414b-4764-8da1-1f49d452e769", "format": "json"}]: dispatch
Oct 5 06:10:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0a3e5f6f-414b-4764-8da1-1f49d452e769, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:10:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0a3e5f6f-414b-4764-8da1-1f49d452e769, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 06:10:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 06:10:08 localhost podman[336445]: 2025-10-05 10:10:08.92186773 +0000 UTC m=+0.082771074 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 06:10:08 localhost podman[336445]: 2025-10-05 10:10:08.930374471 +0000 UTC m=+0.091277815 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Oct 5 06:10:08 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 06:10:08 localhost podman[336444]: 2025-10-05 10:10:08.996855474 +0000 UTC m=+0.164007967 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 5 06:10:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:10:09 localhost podman[336444]: 2025-10-05 10:10:09.033666712 +0000 UTC m=+0.200819185 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:10:09 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 06:10:09 localhost nova_compute[297130]: 2025-10-05 10:10:09.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:10:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 56 KiB/s rd, 32 MiB/s wr, 83 op/s
Oct 5 06:10:10 localhost nova_compute[297130]: 2025-10-05 10:10:10.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:10:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:10:10 localhost podman[336483]: 2025-10-05 10:10:10.923025617 +0000 UTC m=+0.088067849 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7)
Oct 5 06:10:10 localhost podman[336483]: 2025-10-05 10:10:10.964341367 +0000 UTC m=+0.129383569 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.openshift.expose-services=)
Oct 5 06:10:10 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:10:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:10:11 Oct 5 06:10:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:10:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:10:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['.mgr', 'vms', 'manila_metadata', 'backups', 'manila_data', 'images', 'volumes'] Oct 5 06:10:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:10:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:10:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:10:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:10:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:10:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:10:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:10:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 45 KiB/s rd, 26 MiB/s wr, 67 op/s Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065823911222780565 of space, bias 1.0, pg target 1.3164782244556112 quantized to 32 (current 
32) Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014871994521217196 of space, bias 1.0, pg target 0.2959526909722222 quantized to 32 (current 32) Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.06627647787548856 of space, bias 1.0, pg target 13.189019097222223 quantized to 32 (current 32) Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.0709275544388615e-05 quantized to 32 (current 32) Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.0709275544388615e-05 quantized to 32 (current 32) Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:10:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.961875348967058e-05 of space, bias 4.0, pg target 0.03691635259631491 quantized to 16 (current 16) Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] 
load_schedules: vms, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:10:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:10:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "659c02a3-e91b-4850-bd9a-098959ca9f7c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/659c02a3-e91b-4850-bd9a-098959ca9f7c/.meta.tmp' Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/659c02a3-e91b-4850-bd9a-098959ca9f7c/.meta.tmp' to config b'/volumes/_nogroup/659c02a3-e91b-4850-bd9a-098959ca9f7c/.meta' Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, vol_name:cephfs) < "" Oct 5 
06:10:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "659c02a3-e91b-4850-bd9a-098959ca9f7c", "format": "json"}]: dispatch Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "0a3e5f6f-414b-4764-8da1-1f49d452e769_2e310f5b-0185-4614-8406-923681f77613", "force": true, "format": "json"}]: dispatch Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0a3e5f6f-414b-4764-8da1-1f49d452e769_2e310f5b-0185-4614-8406-923681f77613, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs 
subvolume snapshot rm, snap_name:0a3e5f6f-414b-4764-8da1-1f49d452e769_2e310f5b-0185-4614-8406-923681f77613, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "0a3e5f6f-414b-4764-8da1-1f49d452e769", "force": true, "format": "json"}]: dispatch Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0a3e5f6f-414b-4764-8da1-1f49d452e769, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:10:12 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/550656982' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:10:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0a3e5f6f-414b-4764-8da1-1f49d452e769, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e210 e210: 6 total, 6 up, 6 in Oct 5 06:10:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 1.3 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 24 KiB/s rd, 32 MiB/s wr, 42 op/s Oct 5 06:10:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:10:14 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/237115612' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:10:14 localhost nova_compute[297130]: 2025-10-05 10:10:14.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e211 e211: 6 total, 6 up, 6 in Oct 5 06:10:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82_fe037114-4f24-40fb-a6dd-e68688349d57", "force": true, "format": "json"}]: dispatch Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82_fe037114-4f24-40fb-a6dd-e68688349d57, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82_fe037114-4f24-40fb-a6dd-e68688349d57, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82", "force": true, "format": "json"}]: dispatch Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:15 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
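Each subvolume operation above logs the same two-step pattern from the volumes `metadata_manager`: "wrote ... to config `.meta.tmp`" followed by "Renamed ... to config `.meta`". That is the classic atomic-update idiom: write the full new file under a temporary name, then rename it over the old one, so readers only ever see a complete old or complete new `.meta`. A minimal sketch of the idiom (not the ceph-mgr code itself, which writes through libcephfs):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write-then-rename, as in the metadata_manager log lines above:
    the temporary file is fully written and flushed first, then renamed
    into place, so a crash mid-write never leaves a truncated config."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # persist the data before the rename
    os.replace(tmp, path)      # atomic replacement on POSIX filesystems

# usage: repeated updates never expose a partially written file
with tempfile.TemporaryDirectory() as d:
    meta = os.path.join(d, ".meta")
    atomic_write(meta, b"[GLOBAL]\nversion = 2\n")
    print(open(meta, "rb").read())
```

`os.replace` (unlike `os.rename` on Windows) overwrites an existing destination, which is why the log shows the same `.meta.tmp` path reused for every update.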
Oct 5 06:10:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d35dcd55-4c1a-4d9a-8dec-47c8a9a9fa82, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 1.3 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 14 KiB/s rd, 17 MiB/s wr, 24 op/s Oct 5 06:10:15 localhost nova_compute[297130]: 2025-10-05 10:10:15.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e212 e212: 6 total, 6 up, 6 in Oct 5 06:10:16 localhost openstack_network_exporter[250246]: ERROR 10:10:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:10:16 localhost openstack_network_exporter[250246]: ERROR 10:10:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:16 localhost openstack_network_exporter[250246]: ERROR 10:10:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:16 localhost openstack_network_exporter[250246]: ERROR 10:10:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:10:16 localhost openstack_network_exporter[250246]: Oct 5 06:10:16 localhost openstack_network_exporter[250246]: ERROR 10:10:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:10:16 localhost openstack_network_exporter[250246]: Oct 5 06:10:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
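The repeated openstack_network_exporter errors above ("no control socket files found for the ovs db server" / "for ovn-northd") come from `ovs-appctl`-style target resolution: OVS daemons create a control socket named `<daemon>.<pid>.ctl` in their run directory, and the caller recovers the daemon's PID from that filename. On this node the exporter's mounted run directories contain no such files for `ovsdb-server` or `ovn-northd` (only the compute-side `ovs-vswitchd` sockets exist), so the lookup fails. A simplified sketch of the lookup, with the directory listing injected for illustration:

```python
import re
from typing import List, Optional

def pid_from_ctl_sockets(daemon: str, socket_names: List[str]) -> Optional[int]:
    """Resolve a daemon's PID from control socket filenames of the form
    '<daemon>.<pid>.ctl' (e.g. ovsdb-server.1234.ctl under
    /var/run/openvswitch). Returns None when no socket matches, which is
    the condition the exporter reports above. Simplified sketch."""
    pattern = re.compile(r"^" + re.escape(daemon) + r"\.(\d+)\.ctl$")
    for name in socket_names:
        m = pattern.match(name)
        if m:
            return int(m.group(1))
    return None

print(pid_from_ctl_sockets("ovsdb-server", ["ovsdb-server.1234.ctl"]))  # 1234
print(pid_from_ctl_sockets("ovn-northd", []))  # None -> "no control socket files found"
```

In the logged deployment this is expected noise rather than a fault: `ovn-northd` and the NB/SB `ovsdb-server` run on the controllers, not on this compute node, so the exporter's OVN collectors simply have nothing local to query.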
Oct 5 06:10:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "659c02a3-e91b-4850-bd9a-098959ca9f7c", "format": "json"}]: dispatch Oct 5 06:10:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:16 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '659c02a3-e91b-4850-bd9a-098959ca9f7c' of type subvolume Oct 5 06:10:16 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:16.897+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '659c02a3-e91b-4850-bd9a-098959ca9f7c' of type subvolume Oct 5 06:10:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "659c02a3-e91b-4850-bd9a-098959ca9f7c", "force": true, "format": "json"}]: dispatch Oct 5 06:10:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, vol_name:cephfs) < "" Oct 5 06:10:16 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/659c02a3-e91b-4850-bd9a-098959ca9f7c'' moved to trashcan Oct 5 06:10:16 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 
'cephfs' Oct 5 06:10:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:659c02a3-e91b-4850-bd9a-098959ca9f7c, vol_name:cephfs) < "" Oct 5 06:10:16 localhost podman[336502]: 2025-10-05 10:10:16.943115137 +0000 UTC m=+0.085734576 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent) Oct 5 06:10:16 localhost podman[336502]: 2025-10-05 10:10:16.952320826 +0000 UTC m=+0.094940245 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:10:16 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:10:17 localhost nova_compute[297130]: 2025-10-05 10:10:17.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp' Oct 5 06:10:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp' to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta' Oct 5 06:10:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume 
getpath", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "format": "json"}]: dispatch Oct 5 06:10:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 272 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 24 MiB/s wr, 188 op/s Oct 5 06:10:18 localhost nova_compute[297130]: 2025-10-05 10:10:18.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Oct 5 06:10:18 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3064017493' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Oct 5 06:10:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "40337151-dfe3-4441-aab5-125f008b3b65_a9b5fda1-ef1d-4c92-b32e-909b2db70237", "force": true, "format": "json"}]: dispatch Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40337151-dfe3-4441-aab5-125f008b3b65_a9b5fda1-ef1d-4c92-b32e-909b2db70237, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40337151-dfe3-4441-aab5-125f008b3b65_a9b5fda1-ef1d-4c92-b32e-909b2db70237, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "40337151-dfe3-4441-aab5-125f008b3b65", "force": true, "format": "json"}]: dispatch Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40337151-dfe3-4441-aab5-125f008b3b65, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:40337151-dfe3-4441-aab5-125f008b3b65, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e213 e213: 6 total, 6 up, 6 in Oct 5 06:10:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:10:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1846599075' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:10:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:10:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1846599075' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:10:19 localhost nova_compute[297130]: 2025-10-05 10:10:19.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 272 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 1.4 MiB/s wr, 155 op/s Oct 5 06:10:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e214 e214: 6 total, 6 up, 6 in Oct 5 06:10:20 localhost nova_compute[297130]: 2025-10-05 10:10:20.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:20 localhost nova_compute[297130]: 2025-10-05 10:10:20.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:10:20 localhost nova_compute[297130]: 2025-10-05 10:10:20.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:10:20 localhost nova_compute[297130]: 2025-10-05 10:10:20.302 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:10:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:20.407 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:10:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:10:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:10:20 localhost nova_compute[297130]: 2025-10-05 10:10:20.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "snap_name": "f439cebd-cde4-4d86-b3a8-b50b3dd7567d", "format": "json"}]: dispatch Oct 5 06:10:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f439cebd-cde4-4d86-b3a8-b50b3dd7567d, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f439cebd-cde4-4d86-b3a8-b50b3dd7567d, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e215 e215: 6 total, 6 up, 6 in Oct 5 06:10:21 localhost systemd[1]: tmp-crun.RVDFfm.mount: Deactivated successfully. Oct 5 06:10:21 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:10:21 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:10:21 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:10:21 localhost podman[336535]: 2025-10-05 10:10:21.226165044 +0000 UTC m=+0.076406613 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 
10:10:21.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.295 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.296 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.297 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.297 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf 
/etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:10:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 272 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 1.4 MiB/s wr, 163 op/s Oct 5 06:10:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:10:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2688883417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:10:21 localhost nova_compute[297130]: 2025-10-05 10:10:21.822 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:10:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e216 e216: 6 total, 6 up, 6 in Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.024 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.026 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11550MB free_disk=41.700164794921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.026 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.026 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:10:22 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "73b2b243-157e-4e4f-aa23-56618d37561d_d9cf138e-25a7-4617-b6df-94de033d3ae3", "force": true, "format": "json"}]: dispatch Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73b2b243-157e-4e4f-aa23-56618d37561d_d9cf138e-25a7-4617-b6df-94de033d3ae3, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73b2b243-157e-4e4f-aa23-56618d37561d_d9cf138e-25a7-4617-b6df-94de033d3ae3, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:22 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "73b2b243-157e-4e4f-aa23-56618d37561d", "force": true, "format": "json"}]: dispatch Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73b2b243-157e-4e4f-aa23-56618d37561d, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.095 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.095 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.133 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:10:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:73b2b243-157e-4e4f-aa23-56618d37561d, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:22.477 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:22 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:22.480 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:10:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:10:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1581446410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.633 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.639 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.661 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} 
set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.664 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:10:22 localhost nova_compute[297130]: 2025-10-05 10:10:22.664 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:10:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e217 e217: 6 total, 6 up, 6 in Oct 5 06:10:23 localhost nova_compute[297130]: 2025-10-05 10:10:23.665 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:23 localhost nova_compute[297130]: 2025-10-05 10:10:23.666 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 319 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 137 KiB/s rd, 5.4 MiB/s wr, 209 op/s Oct 5 06:10:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:24 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon).osd e218 e218: 6 total, 6 up, 6 in Oct 5 06:10:24 localhost nova_compute[297130]: 2025-10-05 10:10:24.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "snap_name": "a64fa40b-711e-4555-90c3-1cd41e948fff", "format": "json"}]: dispatch Oct 5 06:10:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a64fa40b-711e-4555-90c3-1cd41e948fff, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a64fa40b-711e-4555-90c3-1cd41e948fff, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < "" Oct 5 06:10:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:10:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. 
Oct 5 06:10:25 localhost podman[336602]: 2025-10-05 10:10:25.160993728 +0000 UTC m=+0.089693263 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:10:25 localhost podman[336602]: 2025-10-05 10:10:25.173258011 +0000 UTC m=+0.101957576 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:10:25 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:10:25 localhost systemd[1]: tmp-crun.lvGhum.mount: Deactivated successfully. 
Oct 5 06:10:25 localhost podman[336601]: 2025-10-05 10:10:25.216487302 +0000 UTC m=+0.149308759 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS) Oct 5 06:10:25 localhost podman[336601]: 2025-10-05 10:10:25.254416871 +0000 UTC m=+0.187238328 container exec_died 
508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:10:25 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:10:25 localhost nova_compute[297130]: 2025-10-05 10:10:25.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:10:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "7e56c91d-eb22-4fac-8980-7cbae18ed1d3_3e1c355f-9836-46f1-b347-04d9c4f532bf", "force": true, "format": "json"}]: dispatch Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e56c91d-eb22-4fac-8980-7cbae18ed1d3_3e1c355f-9836-46f1-b347-04d9c4f532bf, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e56c91d-eb22-4fac-8980-7cbae18ed1d3_3e1c355f-9836-46f1-b347-04d9c4f532bf, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", 
"sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "snap_name": "7e56c91d-eb22-4fac-8980-7cbae18ed1d3", "force": true, "format": "json"}]: dispatch Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e56c91d-eb22-4fac-8980-7cbae18ed1d3, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta.tmp' to config b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8/.meta' Oct 5 06:10:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e56c91d-eb22-4fac-8980-7cbae18ed1d3, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < "" Oct 5 06:10:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 319 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 120 KiB/s rd, 4.7 MiB/s wr, 183 op/s Oct 5 06:10:25 localhost nova_compute[297130]: 2025-10-05 10:10:25.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:26 localhost podman[248157]: time="2025-10-05T10:10:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:10:26 localhost podman[248157]: @ - - [05/Oct/2025:10:10:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:10:26 localhost 
podman[248157]: @ - - [05/Oct/2025:10:10:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19344 "" "Go-http-client/1.1"
Oct 5 06:10:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e219 e219: 6 total, 6 up, 6 in
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.340 271653 INFO neutron.agent.dhcp.agent [None req-626a54dc-2fa1-4f97-848e-c1d88357eade - - - - - -] Synchronizing state
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.466 271653 INFO neutron.agent.dhcp.agent [None req-e24b71c2-ab87-408b-8da8-511d2612c270 - - - - - -] All active networks have been fetched through RPC.
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.467 271653 INFO neutron.agent.dhcp.agent [-] Starting network a95d87a7-85be-4c88-bd5e-3597ee84782d dhcp configuration
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.468 271653 INFO neutron.agent.dhcp.agent [-] Finished network a95d87a7-85be-4c88-bd5e-3597ee84782d dhcp configuration
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.468 271653 INFO neutron.agent.dhcp.agent [None req-e24b71c2-ab87-408b-8da8-511d2612c270 - - - - - -] Synchronizing state complete
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.469 271653 INFO neutron.agent.dhcp.agent [None req-9fd91869-0206-4663-86e9-63f5eedc081e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:10:26 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:26.482 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 5 06:10:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e220 e220: 6 total, 6 up, 6 in
Oct 5 06:10:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:26.961 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 5 06:10:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 272 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 122 KiB/s rd, 33 KiB/s wr, 174 op/s
Oct 5 06:10:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "snap_name": "a64fa40b-711e-4555-90c3-1cd41e948fff_e60a05f7-c824-477d-b905-d6505f971c1f", "force": true, "format": "json"}]: dispatch
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a64fa40b-711e-4555-90c3-1cd41e948fff_e60a05f7-c824-477d-b905-d6505f971c1f, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp'
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp' to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta'
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a64fa40b-711e-4555-90c3-1cd41e948fff_e60a05f7-c824-477d-b905-d6505f971c1f, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "snap_name": "a64fa40b-711e-4555-90c3-1cd41e948fff", "force": true, "format": "json"}]: dispatch
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a64fa40b-711e-4555-90c3-1cd41e948fff, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost nova_compute[297130]: 2025-10-05 10:10:28.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 5 06:10:28 localhost nova_compute[297130]: 2025-10-05 10:10:28.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp'
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp' to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta'
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a64fa40b-711e-4555-90c3-1cd41e948fff, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e221 e221: 6 total, 6 up, 6 in
Oct 5 06:10:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "format": "json"}]: dispatch
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:28.576+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab5beba3-8fac-4d3a-a840-1367d17e7bf8' of type subvolume
Oct 5 06:10:28 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab5beba3-8fac-4d3a-a840-1367d17e7bf8' of type subvolume
Oct 5 06:10:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab5beba3-8fac-4d3a-a840-1367d17e7bf8", "force": true, "format": "json"}]: dispatch
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ab5beba3-8fac-4d3a-a840-1367d17e7bf8'' moved to trashcan
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:10:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab5beba3-8fac-4d3a-a840-1367d17e7bf8, vol_name:cephfs) < ""
Oct 5 06:10:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:10:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:10:28 localhost podman[336642]: 2025-10-05 10:10:28.908911952 +0000 UTC m=+0.075581081 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 5 06:10:28 localhost podman[336643]: 2025-10-05 10:10:28.959098662 +0000 UTC m=+0.121038632 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 5 06:10:28 localhost podman[336642]: 2025-10-05 10:10:28.991259155 +0000 UTC m=+0.157928294 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:10:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:10:29 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:10:29 localhost podman[336643]: 2025-10-05 10:10:29.074896462 +0000 UTC m=+0.236836412 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 5 06:10:29 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:10:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e222 e222: 6 total, 6 up, 6 in
Oct 5 06:10:29 localhost nova_compute[297130]: 2025-10-05 10:10:29.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:10:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 272 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 138 KiB/s rd, 37 KiB/s wr, 196 op/s
Oct 5 06:10:30 localhost nova_compute[297130]: 2025-10-05 10:10:30.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:10:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1397eb45-1814-4932-aaad-e4efd0de2e1a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e223 e223: 6 total, 6 up, 6 in
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1397eb45-1814-4932-aaad-e4efd0de2e1a/.meta.tmp'
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1397eb45-1814-4932-aaad-e4efd0de2e1a/.meta.tmp' to config b'/volumes/_nogroup/1397eb45-1814-4932-aaad-e4efd0de2e1a/.meta'
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1397eb45-1814-4932-aaad-e4efd0de2e1a", "format": "json"}]: dispatch
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "snap_name": "f439cebd-cde4-4d86-b3a8-b50b3dd7567d_316ad520-c570-4aab-98b0-fef387acec53", "force": true, "format": "json"}]: dispatch
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f439cebd-cde4-4d86-b3a8-b50b3dd7567d_316ad520-c570-4aab-98b0-fef387acec53, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp'
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp' to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta'
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f439cebd-cde4-4d86-b3a8-b50b3dd7567d_316ad520-c570-4aab-98b0-fef387acec53, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "snap_name": "f439cebd-cde4-4d86-b3a8-b50b3dd7567d", "force": true, "format": "json"}]: dispatch
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f439cebd-cde4-4d86-b3a8-b50b3dd7567d, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp'
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta.tmp' to config b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb/.meta'
Oct 5 06:10:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f439cebd-cde4-4d86-b3a8-b50b3dd7567d, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 272 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 113 KiB/s rd, 31 KiB/s wr, 160 op/s
Oct 5 06:10:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e224 e224: 6 total, 6 up, 6 in
Oct 5 06:10:32 localhost neutron_sriov_agent[264647]: 2025-10-05 10:10:32.742 2 INFO neutron.agent.securitygroups_rpc [req-c108df62-5349-477f-8022-dc93d2a68330 req-724fb1dc-04d4-4b1f-a5ff-0a795b0c07df f780144ddebc407da5a029259c3265a6 1c8daf35e79847329bde1c6cf0340477 - - default default] Security group member updated ['d9126934-1777-40de-b348-3975c8158884']
Oct 5 06:10:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "3f76ad6a-0488-462d-b26e-f8a99e40b931", "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:10:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:3f76ad6a-0488-462d-b26e-f8a99e40b931, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:10:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:3f76ad6a-0488-462d-b26e-f8a99e40b931, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:10:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e225 e225: 6 total, 6 up, 6 in
Oct 5 06:10:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 193 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 202 KiB/s rd, 89 KiB/s wr, 287 op/s
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0)
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0)
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0)
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0)
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0)
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0)
Oct 5 06:10:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e226 e226: 6 total, 6 up, 6 in
Oct 5 06:10:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:34 localhost nova_compute[297130]: 2025-10-05 10:10:34.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:10:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "format": "json"}]: dispatch
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:10:34 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:34.818+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cd3d054-6169-48d7-af99-937dbebe5bbb' of type subvolume
Oct 5 06:10:34 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cd3d054-6169-48d7-af99-937dbebe5bbb' of type subvolume
Oct 5 06:10:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cd3d054-6169-48d7-af99-937dbebe5bbb", "force": true, "format": "json"}]: dispatch
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9cd3d054-6169-48d7-af99-937dbebe5bbb'' moved to trashcan
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cd3d054-6169-48d7-af99-937dbebe5bbb, vol_name:cephfs) < ""
Oct 5 06:10:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6e586298-e813-44ef-b145-12e210b50e50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:10:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e586298-e813-44ef-b145-12e210b50e50, vol_name:cephfs) < ""
Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6e586298-e813-44ef-b145-12e210b50e50/.meta.tmp'
Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e586298-e813-44ef-b145-12e210b50e50/.meta.tmp' to config b'/volumes/_nogroup/6e586298-e813-44ef-b145-12e210b50e50/.meta'
Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e586298-e813-44ef-b145-12e210b50e50, vol_name:cephfs) < ""
Oct 5 06:10:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e586298-e813-44ef-b145-12e210b50e50", "format": "json"}]: dispatch
Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e586298-e813-44ef-b145-12e210b50e50, vol_name:cephfs) < ""
Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e586298-e813-44ef-b145-12e210b50e50, vol_name:cephfs) < ""
Oct 5 06:10:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 5 06:10:35 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 5 06:10:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 5 06:10:35 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:10:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 5 06:10:35 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev d6550797-6ca2-4106-b1f7-4404883f48e3 (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:10:35 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev d6550797-6ca2-4106-b1f7-4404883f48e3 (Updating node-proxy deployment (+3 -> 3))
Oct 5 06:10:35 localhost ceph-mgr[301363]: [progress INFO root] Completed event d6550797-6ca2-4106-b1f7-4404883f48e3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Oct 5 06:10:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 5 06:10:35 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 5 06:10:35 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 5 06:10:35 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus'
Oct 5 06:10:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e227 e227: 6 total, 6 up, 6 in
Oct 5 06:10:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:35.593 271653 INFO neutron.agent.linux.ip_lib [None req-a8629425-c0c8-44d3-ba21-b5ef6061a869 - - - - - -] Device tap94455a6e-6e cannot be used as it has no MAC address
Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:10:35 localhost kernel: device tap94455a6e-6e entered promiscuous mode
Oct 5 06:10:35 localhost NetworkManager[5970]: [1759659035.6980] manager: (tap94455a6e-6e): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:10:35 localhost ovn_controller[157556]: 2025-10-05T10:10:35Z|00188|binding|INFO|Claiming lport 94455a6e-6e51-457c-b9d3-7bd8c197724b for this chassis.
Oct 5 06:10:35 localhost ovn_controller[157556]: 2025-10-05T10:10:35Z|00189|binding|INFO|94455a6e-6e51-457c-b9d3-7bd8c197724b: Claiming unknown
Oct 5 06:10:35 localhost systemd-udevd[336837]: Network interface NamePolicy= disabled on kernel command line.
Oct 5 06:10:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:35.717 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-05757a6b-9b26-4a13-be7f-b2caf49ea98e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05757a6b-9b26-4a13-be7f-b2caf49ea98e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36bd7039b7af4b7db5db22e101f63a40', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbfb02f9-67dd-4151-a6e6-d08f182fadd3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=94455a6e-6e51-457c-b9d3-7bd8c197724b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:10:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:35.719 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 94455a6e-6e51-457c-b9d3-7bd8c197724b in datapath 05757a6b-9b26-4a13-be7f-b2caf49ea98e bound to our chassis#033[00m Oct 5 06:10:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:35.721 163201 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 05757a6b-9b26-4a13-be7f-b2caf49ea98e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Oct 5 06:10:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:35.722 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[9055a3f6-7563-4c7e-b44b-6084e91b2133]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost ovn_controller[157556]: 2025-10-05T10:10:35Z|00190|binding|INFO|Setting lport 94455a6e-6e51-457c-b9d3-7bd8c197724b ovn-installed in OVS Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:35 localhost ovn_controller[157556]: 2025-10-05T10:10:35Z|00191|binding|INFO|Setting lport 94455a6e-6e51-457c-b9d3-7bd8c197724b up in Southbound Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost journal[237639]: ethtool ioctl 
error on tap94455a6e-6e: No such device Oct 5 06:10:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 193 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 222 KiB/s rd, 97 KiB/s wr, 316 op/s Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:35 localhost nova_compute[297130]: 2025-10-05 10:10:35.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "3f76ad6a-0488-462d-b26e-f8a99e40b931", "force": true, "format": "json"}]: dispatch Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:3f76ad6a-0488-462d-b26e-f8a99e40b931, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:3f76ad6a-0488-462d-b26e-f8a99e40b931, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e228 e228: 6 total, 6 up, 6 in Oct 5 06:10:36 localhost podman[336906]: Oct 5 06:10:36 localhost podman[336906]: 2025-10-05 10:10:36.759608486 +0000 UTC m=+0.096584190 container create 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:10:36 localhost systemd[1]: Started libpod-conmon-5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634.scope. Oct 5 06:10:36 localhost podman[336906]: 2025-10-05 10:10:36.710749701 +0000 UTC m=+0.047725485 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:10:36 localhost systemd[1]: Started libcrun container. Oct 5 06:10:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267c42237fa4de54f67518ae38c1aefd3095d4b32214efb7184780c15c1c3634/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:10:36 localhost podman[336906]: 2025-10-05 10:10:36.851386964 +0000 UTC m=+0.188362668 container init 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001) Oct 5 06:10:36 localhost podman[336906]: 2025-10-05 10:10:36.860336867 +0000 UTC m=+0.197312571 container start 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:10:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e229 e229: 6 total, 6 up, 6 in Oct 5 06:10:36 localhost dnsmasq[336925]: started, version 2.85 cachesize 150 Oct 5 06:10:36 localhost dnsmasq[336925]: DNS service limited to local subnets Oct 5 06:10:36 localhost dnsmasq[336925]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:10:36 localhost dnsmasq[336925]: warning: no upstream servers configured Oct 5 06:10:36 localhost dnsmasq-dhcp[336925]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:10:36 localhost dnsmasq[336925]: read /var/lib/neutron/dhcp/05757a6b-9b26-4a13-be7f-b2caf49ea98e/addn_hosts - 0 addresses Oct 5 06:10:36 localhost dnsmasq-dhcp[336925]: read /var/lib/neutron/dhcp/05757a6b-9b26-4a13-be7f-b2caf49ea98e/host Oct 5 06:10:36 localhost dnsmasq-dhcp[336925]: read /var/lib/neutron/dhcp/05757a6b-9b26-4a13-be7f-b2caf49ea98e/opts Oct 5 06:10:36 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:10:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:10:36 localhost ovn_controller[157556]: 2025-10-05T10:10:36Z|00192|binding|INFO|Removing iface tap94455a6e-6e ovn-installed in OVS Oct 5 06:10:36 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:36.961 163201 WARNING 
neutron.agent.ovn.metadata.agent [-] Removing non-external type port 210d5c35-ecd7-4cf8-bc4e-ca516d4011d2 with type "" Oct 5 06:10:36 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:36.962 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-05757a6b-9b26-4a13-be7f-b2caf49ea98e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05757a6b-9b26-4a13-be7f-b2caf49ea98e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36bd7039b7af4b7db5db22e101f63a40', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbfb02f9-67dd-4151-a6e6-d08f182fadd3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=94455a6e-6e51-457c-b9d3-7bd8c197724b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43 Oct 5 06:10:36 localhost ovn_controller[157556]: 2025-10-05T10:10:36Z|00193|binding|INFO|Removing lport 94455a6e-6e51-457c-b9d3-7bd8c197724b ovn-installed in OVS Oct 5 06:10:36 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:36.964 163201 INFO neutron.agent.ovn.metadata.agent [-] Port 94455a6e-6e51-457c-b9d3-7bd8c197724b in datapath 05757a6b-9b26-4a13-be7f-b2caf49ea98e unbound from our chassis Oct 5 
06:10:36 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:36.965 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05757a6b-9b26-4a13-be7f-b2caf49ea98e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628 Oct 5 06:10:36 localhost nova_compute[297130]: 2025-10-05 10:10:36.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:36 localhost ovn_metadata_agent[163196]: 2025-10-05 10:10:36.966 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[53e2d663-3a03-49de-90be-f18d96643a1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501 Oct 5 06:10:36 localhost nova_compute[297130]: 2025-10-05 10:10:36.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:36 localhost kernel: device tap94455a6e-6e left promiscuous mode Oct 5 06:10:36 localhost nova_compute[297130]: 2025-10-05 10:10:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:36 localhost nova_compute[297130]: 2025-10-05 10:10:36.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.046 271653 INFO neutron.agent.dhcp.agent [None req-c9330545-a411-4f34-891b-ede8049c2aef - - - - - -] DHCP configuration for ports {'c06cd425-b802-4f80-9e14-fe0eef99a34e'} is completed Oct 5 06:10:37 localhost dnsmasq[336925]: read /var/lib/neutron/dhcp/05757a6b-9b26-4a13-be7f-b2caf49ea98e/addn_hosts - 0 addresses Oct 5 06:10:37 localhost dnsmasq-dhcp[336925]: read 
/var/lib/neutron/dhcp/05757a6b-9b26-4a13-be7f-b2caf49ea98e/host Oct 5 06:10:37 localhost podman[336945]: 2025-10-05 10:10:37.298343522 +0000 UTC m=+0.059885274 container kill 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:10:37 localhost dnsmasq-dhcp[336925]: read /var/lib/neutron/dhcp/05757a6b-9b26-4a13-be7f-b2caf49ea98e/opts Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent [None req-621180a9-530d-4e91-b82a-93951fa34738 - - - - - -] Unable to reload_allocations dhcp for 05757a6b-9b26-4a13-be7f-b2caf49ea98e.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap94455a6e-6e not found in namespace qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e. 
Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Oct 5 06:10:37 localhost 
neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent return fut.result() Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent return self.__get_result() Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent raise self._exception Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap94455a6e-6e not found in namespace qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e. Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.326 271653 ERROR neutron.agent.dhcp.agent Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.330 271653 INFO neutron.agent.dhcp.agent [None req-e24b71c2-ab87-408b-8da8-511d2612c270 - - - - - -] Synchronizing state Oct 5 06:10:37 localhost nova_compute[297130]: 2025-10-05 10:10:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.515 271653 INFO neutron.agent.dhcp.agent [None req-5d307746-82a4-4d84-81c6-99cc8baadf01 - - - - - -] All active networks have been fetched through RPC. Oct 5 06:10:37 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:10:37 localhost dnsmasq[336925]: exiting on receipt of SIGTERM Oct 5 06:10:37 localhost systemd[1]: libpod-5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634.scope: Deactivated successfully. 
Oct 5 06:10:37 localhost podman[336975]: 2025-10-05 10:10:37.672710023 +0000 UTC m=+0.061770817 container kill 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001) Oct 5 06:10:37 localhost podman[336990]: 2025-10-05 10:10:37.743187703 +0000 UTC m=+0.057549121 container died 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Oct 5 06:10:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 193 MiB data, 1005 MiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 54 KiB/s wr, 130 op/s Oct 5 06:10:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634-userdata-shm.mount: Deactivated successfully. Oct 5 06:10:37 localhost systemd[1]: var-lib-containers-storage-overlay-267c42237fa4de54f67518ae38c1aefd3095d4b32214efb7184780c15c1c3634-merged.mount: Deactivated successfully. 
Oct 5 06:10:37 localhost podman[336990]: 2025-10-05 10:10:37.799700156 +0000 UTC m=+0.114061534 container cleanup 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:10:37 localhost systemd[1]: libpod-conmon-5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634.scope: Deactivated successfully. Oct 5 06:10:37 localhost podman[336991]: 2025-10-05 10:10:37.895281157 +0000 UTC m=+0.196677683 container remove 5e49935775adec194c8c85bc332817369f44352b8e7a0adc2ad5a25991e49634 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05757a6b-9b26-4a13-be7f-b2caf49ea98e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:10:37 localhost systemd[1]: run-netns-qdhcp\x2d05757a6b\x2d9b26\x2d4a13\x2dbe7f\x2db2caf49ea98e.mount: Deactivated successfully. 
Oct 5 06:10:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:10:37.925 271653 INFO neutron.agent.dhcp.agent [None req-0ea8e357-ab93-4d7f-8860-efdc854ccf6c - - - - - -] Synchronizing state complete Oct 5 06:10:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e586298-e813-44ef-b145-12e210b50e50", "format": "json"}]: dispatch Oct 5 06:10:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e586298-e813-44ef-b145-12e210b50e50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e586298-e813-44ef-b145-12e210b50e50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:38 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:38.594+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e586298-e813-44ef-b145-12e210b50e50' of type subvolume Oct 5 06:10:38 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e586298-e813-44ef-b145-12e210b50e50' of type subvolume Oct 5 06:10:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e586298-e813-44ef-b145-12e210b50e50", "force": true, "format": "json"}]: dispatch Oct 5 06:10:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e586298-e813-44ef-b145-12e210b50e50, vol_name:cephfs) < "" Oct 5 06:10:38 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6e586298-e813-44ef-b145-12e210b50e50'' moved to trashcan Oct 5 06:10:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:10:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e586298-e813-44ef-b145-12e210b50e50, vol_name:cephfs) < "" Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.887 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 
06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:10:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:10:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "69edd6b3-73f3-4e1d-a403-3da14f74e42e", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, 
group_name:69edd6b3-73f3-4e1d-a403-3da14f74e42e, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:69edd6b3-73f3-4e1d-a403-3da14f74e42e, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:39 localhost nova_compute[297130]: 2025-10-05 10:10:39.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Oct 5 06:10:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 193 MiB data, 1005 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 41 KiB/s wr, 98 op/s Oct 5 06:10:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61112788-f7d7-42e3-86ad-1ffeef198fb4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, vol_name:cephfs) < "" Oct 5 06:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:10:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61112788-f7d7-42e3-86ad-1ffeef198fb4/.meta.tmp' Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61112788-f7d7-42e3-86ad-1ffeef198fb4/.meta.tmp' to config b'/volumes/_nogroup/61112788-f7d7-42e3-86ad-1ffeef198fb4/.meta' Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, vol_name:cephfs) < "" Oct 5 06:10:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61112788-f7d7-42e3-86ad-1ffeef198fb4", "format": "json"}]: dispatch Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, vol_name:cephfs) < "" Oct 5 06:10:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, vol_name:cephfs) < "" Oct 5 06:10:39 localhost podman[337021]: 2025-10-05 10:10:39.933625162 +0000 UTC m=+0.089522169 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 
'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:10:39 localhost podman[337021]: 2025-10-05 10:10:39.948140445 +0000 UTC m=+0.104037442 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:10:39 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:10:40 localhost podman[337020]: 2025-10-05 10:10:40.038896075 +0000 UTC m=+0.195411109 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:10:40 localhost podman[337020]: 2025-10-05 10:10:40.07818459 +0000 UTC m=+0.234699634 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:10:40 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
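The podman `health_status` / `exec_died` events above carry the container name and health result as `key=value` pairs inside the parenthesized attribute list. A minimal sketch for pulling those two fields out of such an event line (the abbreviated sample line below is constructed from the entries above; the field order in real events varies, which is why each field gets its own search):

```python
import re

def parse_health_event(line: str):
    """Extract (container name, health status) from a podman
    'container health_status' event line, or None for a missing field."""
    # \b keeps 'name=' from matching inside 'container_name=' or image paths.
    name = re.search(r"\bname=([^,)]+)", line)
    status = re.search(r"\bhealth_status=([^,)]+)", line)
    return (name.group(1) if name else None,
            status.group(1) if status else None)

# Abbreviated sample modeled on the ceilometer_agent_compute event above.
sample = ("Oct 5 06:10:40 localhost podman[337020]: container health_status "
          "b7f28ca4... (image=quay.io/podified-antelope-centos9/"
          "openstack-ceilometer-compute:current-podified, "
          "name=ceilometer_agent_compute, health_status=healthy)")

print(parse_health_event(sample))
```

This is only a sketch against the event shape seen in this log, not a stable podman output format guarantee; for programmatic use, `podman events --format json` is the supported interface.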
Oct 5 06:10:40 localhost podman[337076]: 2025-10-05 10:10:40.395400481 +0000 UTC m=+0.062526996 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Oct 5 06:10:40 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:10:40 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:10:40 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:10:40 localhost nova_compute[297130]: 2025-10-05 10:10:40.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:40 localhost nova_compute[297130]: 2025-10-05 10:10:40.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:10:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 193 MiB data, 1005 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 35 KiB/s wr, 83 op/s Oct 5 06:10:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:10:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e230 e230: 6 total, 6 up, 6 in Oct 5 06:10:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1397eb45-1814-4932-aaad-e4efd0de2e1a", "format": "json"}]: dispatch Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:41.881+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1397eb45-1814-4932-aaad-e4efd0de2e1a' of type subvolume Oct 5 06:10:41 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1397eb45-1814-4932-aaad-e4efd0de2e1a' of type subvolume Oct 5 06:10:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1397eb45-1814-4932-aaad-e4efd0de2e1a", "force": true, "format": 
"json"}]: dispatch Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, vol_name:cephfs) < "" Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1397eb45-1814-4932-aaad-e4efd0de2e1a'' moved to trashcan Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:10:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1397eb45-1814-4932-aaad-e4efd0de2e1a, vol_name:cephfs) < "" Oct 5 06:10:41 localhost podman[337097]: 2025-10-05 10:10:41.921878788 +0000 UTC m=+0.089650142 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 5 06:10:41 localhost podman[337097]: 2025-10-05 10:10:41.935964539 +0000 UTC m=+0.103735923 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:10:41 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:10:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "69edd6b3-73f3-4e1d-a403-3da14f74e42e", "force": true, "format": "json"}]: dispatch Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:69edd6b3-73f3-4e1d-a403-3da14f74e42e, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:69edd6b3-73f3-4e1d-a403-3da14f74e42e, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "b2e3c395-4c18-43fc-8351-1fa81c00adb8", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:b2e3c395-4c18-43fc-8351-1fa81c00adb8, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:b2e3c395-4c18-43fc-8351-1fa81c00adb8, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18f67856-fddb-45f1-b005-397ef23a8150", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta.tmp' Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta.tmp' to config b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta' Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18f67856-fddb-45f1-b005-397ef23a8150", "format": "json"}]: dispatch Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 194 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 85 KiB/s wr, 75 op/s Oct 5 06:10:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:44 localhost 
nova_compute[297130]: 2025-10-05 10:10:44.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b44a2111-d07c-4585-a635-05d39c432bc2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b44a2111-d07c-4585-a635-05d39c432bc2, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b44a2111-d07c-4585-a635-05d39c432bc2/.meta.tmp' Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b44a2111-d07c-4585-a635-05d39c432bc2/.meta.tmp' to config b'/volumes/_nogroup/b44a2111-d07c-4585-a635-05d39c432bc2/.meta' Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b44a2111-d07c-4585-a635-05d39c432bc2, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b44a2111-d07c-4585-a635-05d39c432bc2", "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b44a2111-d07c-4585-a635-05d39c432bc2, vol_name:cephfs) < "" Oct 5 
06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b44a2111-d07c-4585-a635-05d39c432bc2, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61112788-f7d7-42e3-86ad-1ffeef198fb4", "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:45.466+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61112788-f7d7-42e3-86ad-1ffeef198fb4' of type subvolume Oct 5 06:10:45 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61112788-f7d7-42e3-86ad-1ffeef198fb4' of type subvolume Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61112788-f7d7-42e3-86ad-1ffeef198fb4", "force": true, "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/61112788-f7d7-42e3-86ad-1ffeef198fb4'' moved to trashcan Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61112788-f7d7-42e3-86ad-1ffeef198fb4, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "b2e3c395-4c18-43fc-8351-1fa81c00adb8", "force": true, "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:b2e3c395-4c18-43fc-8351-1fa81c00adb8, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:b2e3c395-4c18-43fc-8351-1fa81c00adb8, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 194 MiB data, 1006 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 69 KiB/s wr, 61 op/s Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "ade0e759-68d8-476f-9c0d-4746651aa929", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:ade0e759-68d8-476f-9c0d-4746651aa929, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:45 localhost nova_compute[297130]: 
2025-10-05 10:10:45.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:ade0e759-68d8-476f-9c0d-4746651aa929, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18f67856-fddb-45f1-b005-397ef23a8150", "snap_name": "a4dce444-a421-41cf-b113-18b901712557", "format": "json"}]: dispatch Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a4dce444-a421-41cf-b113-18b901712557, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a4dce444-a421-41cf-b113-18b901712557, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:46 localhost openstack_network_exporter[250246]: ERROR 10:10:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:10:46 localhost openstack_network_exporter[250246]: ERROR 10:10:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:46 localhost openstack_network_exporter[250246]: ERROR 10:10:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:10:46 localhost openstack_network_exporter[250246]: ERROR 10:10:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:10:46 localhost 
openstack_network_exporter[250246]: Oct 5 06:10:46 localhost openstack_network_exporter[250246]: ERROR 10:10:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:10:46 localhost openstack_network_exporter[250246]: Oct 5 06:10:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 194 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 1.6 KiB/s rd, 68 KiB/s wr, 8 op/s Oct 5 06:10:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:10:47 localhost podman[337116]: 2025-10-05 10:10:47.922672264 +0000 UTC m=+0.086495066 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Oct 5 06:10:47 localhost podman[337116]: 2025-10-05 10:10:47.934173527 +0000 UTC m=+0.097996308 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:10:47 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:10:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b44a2111-d07c-4585-a635-05d39c432bc2", "format": "json"}]: dispatch Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b44a2111-d07c-4585-a635-05d39c432bc2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b44a2111-d07c-4585-a635-05d39c432bc2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:48.591+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b44a2111-d07c-4585-a635-05d39c432bc2' of type subvolume Oct 5 06:10:48 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b44a2111-d07c-4585-a635-05d39c432bc2' of type subvolume Oct 5 06:10:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"b44a2111-d07c-4585-a635-05d39c432bc2", "force": true, "format": "json"}]: dispatch Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b44a2111-d07c-4585-a635-05d39c432bc2, vol_name:cephfs) < "" Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b44a2111-d07c-4585-a635-05d39c432bc2'' moved to trashcan Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b44a2111-d07c-4585-a635-05d39c432bc2, vol_name:cephfs) < "" Oct 5 06:10:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "ade0e759-68d8-476f-9c0d-4746651aa929", "force": true, "format": "json"}]: dispatch Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:ade0e759-68d8-476f-9c0d-4746651aa929, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:ade0e759-68d8-476f-9c0d-4746651aa929, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 194 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 1.6 KiB/s rd, 68 KiB/s wr, 8 op/s Oct 5 06:10:49 localhost 
nova_compute[297130]: 2025-10-05 10:10:49.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:50 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18f67856-fddb-45f1-b005-397ef23a8150", "snap_name": "a4dce444-a421-41cf-b113-18b901712557_340c9e28-01ca-4563-9844-16e800f76f4c", "force": true, "format": "json"}]: dispatch Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a4dce444-a421-41cf-b113-18b901712557_340c9e28-01ca-4563-9844-16e800f76f4c, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta.tmp' Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta.tmp' to config b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta' Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a4dce444-a421-41cf-b113-18b901712557_340c9e28-01ca-4563-9844-16e800f76f4c, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:50 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18f67856-fddb-45f1-b005-397ef23a8150", "snap_name": "a4dce444-a421-41cf-b113-18b901712557", "force": true, "format": "json"}]: dispatch Oct 
5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a4dce444-a421-41cf-b113-18b901712557, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta.tmp' Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta.tmp' to config b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150/.meta' Oct 5 06:10:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a4dce444-a421-41cf-b113-18b901712557, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:50 localhost nova_compute[297130]: 2025-10-05 10:10:50.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 194 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 1.6 KiB/s rd, 68 KiB/s wr, 8 op/s Oct 5 06:10:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "74158447-cee8-4cd9-8edb-c3b0c87276b7", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:74158447-cee8-4cd9-8edb-c3b0c87276b7, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:52 localhost 
ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:74158447-cee8-4cd9-8edb-c3b0c87276b7, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:10:53 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18f67856-fddb-45f1-b005-397ef23a8150", "format": "json"}]: dispatch Oct 5 06:10:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18f67856-fddb-45f1-b005-397ef23a8150, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18f67856-fddb-45f1-b005-397ef23a8150, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:10:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:10:53.394+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18f67856-fddb-45f1-b005-397ef23a8150' of type subvolume Oct 5 06:10:53 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18f67856-fddb-45f1-b005-397ef23a8150' of type subvolume Oct 5 06:10:53 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18f67856-fddb-45f1-b005-397ef23a8150", "force": true, "format": "json"}]: dispatch Oct 5 06:10:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:53 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18f67856-fddb-45f1-b005-397ef23a8150'' moved to trashcan Oct 5 06:10:53 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:10:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18f67856-fddb-45f1-b005-397ef23a8150, vol_name:cephfs) < "" Oct 5 06:10:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 86 KiB/s wr, 38 op/s Oct 5 06:10:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:54 localhost nova_compute[297130]: 2025-10-05 10:10:54.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e231 e231: 6 total, 6 up, 6 in Oct 5 06:10:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "74158447-cee8-4cd9-8edb-c3b0c87276b7", "force": true, "format": "json"}]: dispatch Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:74158447-cee8-4cd9-8edb-c3b0c87276b7, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:74158447-cee8-4cd9-8edb-c3b0c87276b7, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:10:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/.meta.tmp' Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/.meta.tmp' to config b'/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/.meta' Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:10:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "format": "json"}]: dispatch Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:10:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:10:55 localhost ceph-mgr[301363]: 
log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 62 KiB/s wr, 43 op/s Oct 5 06:10:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:10:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:10:55 localhost nova_compute[297130]: 2025-10-05 10:10:55.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:55 localhost podman[337134]: 2025-10-05 10:10:55.96971613 +0000 UTC m=+0.136429430 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:10:55 localhost podman[337133]: 2025-10-05 10:10:55.943541701 +0000 UTC m=+0.113492919 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:10:56 localhost podman[337134]: 2025-10-05 10:10:56.002821218 +0000 UTC m=+0.169534458 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:10:56 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:10:56 localhost podman[248157]: time="2025-10-05T10:10:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:10:56 localhost podman[337133]: 2025-10-05 10:10:56.073319839 +0000 UTC m=+0.243271097 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Oct 5 06:10:56 localhost 
podman[248157]: @ - - [05/Oct/2025:10:10:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:10:56 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:10:56 localhost podman[248157]: @ - - [05/Oct/2025:10:10:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19366 "" "Go-http-client/1.1" Oct 5 06:10:56 localhost sshd[337176]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:10:56 localhost sshd[337177]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:10:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 58 KiB/s wr, 55 op/s Oct 5 06:10:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:10:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 58 KiB/s wr, 55 op/s Oct 5 06:10:59 localhost nova_compute[297130]: 2025-10-05 10:10:59.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:10:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:10:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:10:59 localhost podman[337178]: 2025-10-05 10:10:59.943334555 +0000 UTC m=+0.084061180 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3) Oct 5 06:10:59 localhost podman[337178]: 2025-10-05 10:10:59.95234535 +0000 UTC m=+0.093071935 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid) Oct 5 06:10:59 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:10:59 localhost podman[337179]: 2025-10-05 10:10:59.998449449 +0000 UTC m=+0.135288238 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:00 localhost podman[337179]: 2025-10-05 10:11:00.06228159 +0000 UTC m=+0.199120369 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller) Oct 5 06:11:00 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:11:00 localhost nova_compute[297130]: 2025-10-05 10:11:00.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:01 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/.meta.tmp' Oct 5 06:11:01 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/.meta.tmp' to config b'/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/.meta' Oct 5 06:11:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "format": "json"}]: dispatch Oct 5 06:11:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, 
vol_name:cephfs) < "" Oct 5 06:11:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 58 KiB/s wr, 55 op/s Oct 5 06:11:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e232 e232: 6 total, 6 up, 6 in Oct 5 06:11:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:02 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/.meta.tmp' Oct 5 06:11:02 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/.meta.tmp' to config b'/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/.meta' Oct 5 06:11:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "format": "json"}]: dispatch Oct 5 06:11:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 53 KiB/s wr, 23 op/s Oct 5 06:11:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:04 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c Oct 5 06:11:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:04 localhost nova_compute[297130]: 2025-10-05 10:11:04.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:05 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", 
"format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b", "osd", "allow rw pool=manila_data namespace=fsvolumens_ff1566a0-bff7-4f22-929c-ced514b80ab6", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve49", "tenant_id": "1b54ab1a6a7d4eada4f4b298368a1f5e", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 46 KiB/s wr, 20 op/s Oct 5 06:11:05 localhost ceph-mgr[301363]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, tenant_id:1b54ab1a6a7d4eada4f4b298368a1f5e, vol_name:cephfs) < "" Oct 5 06:11:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Oct 5 06:11:05 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID eve49 with tenant 1b54ab1a6a7d4eada4f4b298368a1f5e Oct 5 06:11:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:05 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, 
tenant_id:1b54ab1a6a7d4eada4f4b298368a1f5e, vol_name:cephfs) < "" Oct 5 06:11:06 localhost nova_compute[297130]: 2025-10-05 10:11:06.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Oct 5 06:11:06 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:06 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:06 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:06 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:11:06.294 271653 INFO neutron.agent.linux.ip_lib [None req-45ad8139-0f2c-4048-9222-802716837132 - - - - - -] Device tapa06a4263-4b cannot be used as it has no 
MAC address#033[00m Oct 5 06:11:06 localhost nova_compute[297130]: 2025-10-05 10:11:06.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost kernel: device tapa06a4263-4b entered promiscuous mode Oct 5 06:11:06 localhost NetworkManager[5970]: [1759659066.3282] manager: (tapa06a4263-4b): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Oct 5 06:11:06 localhost nova_compute[297130]: 2025-10-05 10:11:06.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost ovn_controller[157556]: 2025-10-05T10:11:06Z|00194|binding|INFO|Claiming lport a06a4263-4b51-4d79-9505-b27a7155fb25 for this chassis. Oct 5 06:11:06 localhost ovn_controller[157556]: 2025-10-05T10:11:06Z|00195|binding|INFO|a06a4263-4b51-4d79-9505-b27a7155fb25: Claiming unknown Oct 5 06:11:06 localhost systemd-udevd[337231]: Network interface NamePolicy= disabled on kernel command line. 
Oct 5 06:11:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:06.348 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-379e2661-a9f5-4f45-9005-b831c4749361', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379e2661-a9f5-4f45-9005-b831c4749361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96f9484619534895a7ead96d38119886', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7a03c2c-d0f1-432f-afc6-57c4b6bf4162, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a06a4263-4b51-4d79-9505-b27a7155fb25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:06.352 163201 INFO neutron.agent.ovn.metadata.agent [-] Port a06a4263-4b51-4d79-9505-b27a7155fb25 in datapath 379e2661-a9f5-4f45-9005-b831c4749361 bound to our chassis#033[00m Oct 5 06:11:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:06.356 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Port 02b1b002-52d6-4cfb-b8ba-3b8c652c137b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Oct 5 06:11:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:06.356 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 379e2661-a9f5-4f45-9005-b831c4749361, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:06.357 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[49a04a4c-dde8-4666-94f3-e6784902a9ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost ovn_controller[157556]: 2025-10-05T10:11:06Z|00196|binding|INFO|Setting lport a06a4263-4b51-4d79-9505-b27a7155fb25 ovn-installed in OVS Oct 5 06:11:06 localhost ovn_controller[157556]: 2025-10-05T10:11:06Z|00197|binding|INFO|Setting lport a06a4263-4b51-4d79-9505-b27a7155fb25 up in Southbound Oct 5 06:11:06 localhost nova_compute[297130]: 2025-10-05 10:11:06.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 localhost journal[237639]: ethtool ioctl error on tapa06a4263-4b: No such device Oct 5 06:11:06 
localhost nova_compute[297130]: 2025-10-05 10:11:06.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost nova_compute[297130]: 2025-10-05 10:11:06.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b73cf753-77d2-4e24-96f9-657ba5664103", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b73cf753-77d2-4e24-96f9-657ba5664103, vol_name:cephfs) < "" Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b73cf753-77d2-4e24-96f9-657ba5664103/.meta.tmp' Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b73cf753-77d2-4e24-96f9-657ba5664103/.meta.tmp' to config b'/volumes/_nogroup/b73cf753-77d2-4e24-96f9-657ba5664103/.meta' Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b73cf753-77d2-4e24-96f9-657ba5664103, vol_name:cephfs) < "" Oct 5 06:11:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b73cf753-77d2-4e24-96f9-657ba5664103", "format": "json"}]: dispatch Oct 5 06:11:07 
localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b73cf753-77d2-4e24-96f9-657ba5664103, vol_name:cephfs) < "" Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b73cf753-77d2-4e24-96f9-657ba5664103, vol_name:cephfs) < "" Oct 5 06:11:07 localhost podman[337303]: Oct 5 06:11:07 localhost podman[337303]: 2025-10-05 10:11:07.326640545 +0000 UTC m=+0.085916961 container create a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Oct 5 06:11:07 localhost systemd[1]: Started libpod-conmon-a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166.scope. Oct 5 06:11:07 localhost systemd[1]: tmp-crun.XUpJrL.mount: Deactivated successfully. Oct 5 06:11:07 localhost podman[337303]: 2025-10-05 10:11:07.287172825 +0000 UTC m=+0.046449261 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Oct 5 06:11:07 localhost systemd[1]: Started libcrun container. 
Oct 5 06:11:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdb5d142d8318066f0dd8c55f1cefc1e8efe726e19a4037b90c093a68a63439/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Oct 5 06:11:07 localhost podman[337303]: 2025-10-05 10:11:07.418366792 +0000 UTC m=+0.177643198 container init a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:07 localhost podman[337303]: 2025-10-05 10:11:07.427827448 +0000 UTC m=+0.187103884 container start a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:11:07 localhost dnsmasq[337322]: started, version 2.85 cachesize 150 Oct 5 06:11:07 localhost dnsmasq[337322]: DNS service limited to local subnets Oct 5 06:11:07 localhost dnsmasq[337322]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Oct 5 06:11:07 localhost dnsmasq[337322]: warning: no upstream servers configured Oct 
5 06:11:07 localhost dnsmasq-dhcp[337322]: DHCP, static leases only on 10.100.0.0, lease time 1d Oct 5 06:11:07 localhost dnsmasq[337322]: read /var/lib/neutron/dhcp/379e2661-a9f5-4f45-9005-b831c4749361/addn_hosts - 0 addresses Oct 5 06:11:07 localhost dnsmasq-dhcp[337322]: read /var/lib/neutron/dhcp/379e2661-a9f5-4f45-9005-b831c4749361/host Oct 5 06:11:07 localhost dnsmasq-dhcp[337322]: read /var/lib/neutron/dhcp/379e2661-a9f5-4f45-9005-b831c4749361/opts Oct 5 06:11:07 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:11:07.527 271653 INFO neutron.agent.dhcp.agent [None req-be9ca2f2-361d-43f1-8bf8-6120a5ac016f - - - - - -] DHCP configuration for ports {'0b90fbb5-b8d4-4522-bc73-bdfdae97cc4f'} is completed#033[00m Oct 5 06:11:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 52 KiB/s wr, 5 op/s Oct 5 06:11:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:07 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:07 localhost ceph-mon[316511]: 
mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:07 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, client_metadata.root=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b Oct 5 06:11:07 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6/44d72d19-1832-4050-bb06-a94ec596056b],prefix=session evict} (starting...) 
Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "format": "json"}]: dispatch Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:08.035+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1566a0-bff7-4f22-929c-ced514b80ab6' of type subvolume Oct 5 06:11:08 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1566a0-bff7-4f22-929c-ced514b80ab6' of type subvolume Oct 5 06:11:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1566a0-bff7-4f22-929c-ced514b80ab6", "force": true, "format": "json"}]: dispatch Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff1566a0-bff7-4f22-929c-ced514b80ab6'' moved to trashcan Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1566a0-bff7-4f22-929c-ced514b80ab6, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e0b72043-3816-415a-800c-4d2edcbb1a5e", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e0b72043-3816-415a-800c-4d2edcbb1a5e/.meta.tmp' Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e0b72043-3816-415a-800c-4d2edcbb1a5e/.meta.tmp' to config b'/volumes/_nogroup/e0b72043-3816-415a-800c-4d2edcbb1a5e/.meta' Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e0b72043-3816-415a-800c-4d2edcbb1a5e", "format": "json"}]: dispatch Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, vol_name:cephfs) < "" Oct 5 06:11:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve48", "tenant_id": "1b54ab1a6a7d4eada4f4b298368a1f5e", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:09 localhost 
ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, tenant_id:1b54ab1a6a7d4eada4f4b298368a1f5e, vol_name:cephfs) < "" Oct 5 06:11:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Oct 5 06:11:09 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID eve48 with tenant 1b54ab1a6a7d4eada4f4b298368a1f5e Oct 5 06:11:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:09 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, 
tenant_id:1b54ab1a6a7d4eada4f4b298368a1f5e, vol_name:cephfs) < "" Oct 5 06:11:09 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:09 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 52 KiB/s wr, 5 op/s Oct 5 06:11:09 localhost nova_compute[297130]: 2025-10-05 10:11:09.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:09 localhost 
ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/.meta.tmp' Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/.meta.tmp' to config b'/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/.meta' Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "format": "json"}]: dispatch Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) 
< "" Oct 5 06:11:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1b826907-b175-477e-9776-27f573590dbb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < "" Oct 5 06:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:11:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/.meta.tmp' Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/.meta.tmp' to config b'/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/.meta' Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < "" Oct 5 06:11:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1b826907-b175-477e-9776-27f573590dbb", "format": "json"}]: dispatch Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < "" Oct 5 06:11:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < "" Oct 5 06:11:10 localhost podman[337324]: 2025-10-05 10:11:10.929514697 +0000 UTC m=+0.098920753 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute) Oct 5 06:11:10 localhost podman[337325]: 2025-10-05 10:11:10.976481121 +0000 UTC m=+0.142463914 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:11:10 localhost podman[337325]: 2025-10-05 10:11:10.985240098 +0000 UTC m=+0.151222911 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:11:10 localhost podman[337324]: 2025-10-05 10:11:10.99418005 +0000 UTC m=+0.163586106 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute) Oct 5 06:11:10 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:11:11 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:11:11 localhost nova_compute[297130]: 2025-10-05 10:11:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:11:11 Oct 5 06:11:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:11:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:11:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_data', 'images', 'manila_metadata', 'volumes', 'vms', 'backups', '.mgr'] Oct 5 06:11:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:11:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e9f0851d-f1ea-4a9f-a19d-c30687da60d1", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, vol_name:cephfs) < "" Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e9f0851d-f1ea-4a9f-a19d-c30687da60d1/.meta.tmp' Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e9f0851d-f1ea-4a9f-a19d-c30687da60d1/.meta.tmp' to config b'/volumes/_nogroup/e9f0851d-f1ea-4a9f-a19d-c30687da60d1/.meta' Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, vol_name:cephfs) < "" Oct 5 06:11:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e9f0851d-f1ea-4a9f-a19d-c30687da60d1", "format": "json"}]: dispatch Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, vol_name:cephfs) < "" Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, vol_name:cephfs) < "" Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:11:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:11:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 52 KiB/s wr, 5 op/s Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:11:11 localhost ceph-mgr[301363]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.1810441094360693e-06 of space, bias 1.0, pg target 0.00043402777777777775 quantized to 32 (current 32) Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:11:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00023800643844221106 of space, bias 4.0, pg target 0.189453125 quantized to 16 (current 16) Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:11:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:11:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b73cf753-77d2-4e24-96f9-657ba5664103", "format": "json"}]: dispatch Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:b73cf753-77d2-4e24-96f9-657ba5664103, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b73cf753-77d2-4e24-96f9-657ba5664103, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:12.071+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b73cf753-77d2-4e24-96f9-657ba5664103' of type subvolume Oct 5 06:11:12 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b73cf753-77d2-4e24-96f9-657ba5664103' of type subvolume Oct 5 06:11:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b73cf753-77d2-4e24-96f9-657ba5664103", "force": true, "format": "json"}]: dispatch Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b73cf753-77d2-4e24-96f9-657ba5664103, vol_name:cephfs) < "" Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b73cf753-77d2-4e24-96f9-657ba5664103'' moved to trashcan Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b73cf753-77d2-4e24-96f9-657ba5664103, vol_name:cephfs) < "" Oct 5 06:11:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve48", "format": "json"}]: dispatch Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Oct 5 06:11:12 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Oct 5 06:11:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Oct 5 06:11:12 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve48", "format": "json"}]: dispatch Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:12 localhost 
ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6 Oct 5 06:11:12 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6],prefix=session evict} (starting...) Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:11:12 localhost podman[337369]: 2025-10-05 10:11:12.908966975 +0000 UTC m=+0.078284703 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:11:12 localhost podman[337369]: 2025-10-05 10:11:12.94602827 +0000 UTC m=+0.115346008 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Oct 5 06:11:12 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:11:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch
Oct 5 06:11:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:11:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 5 06:11:13 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 5 06:11:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d
Oct 5 06:11:13 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:11:13 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:13 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:11:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:11:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 344 B/s rd, 112 KiB/s wr, 12 op/s
Oct 5 06:11:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:11:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1b826907-b175-477e-9776-27f573590dbb", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch
Oct 5 06:11:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:1b826907-b175-477e-9776-27f573590dbb, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < ""
Oct 5 06:11:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0)
Oct 5 06:11:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch
Oct 5 06:11:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c
Oct 5 06:11:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:11:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:1b826907-b175-477e-9776-27f573590dbb, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < ""
Oct 5 06:11:14 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch
Oct 5 06:11:14 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:14 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_1b826907-b175-477e-9776-27f573590dbb", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:11:14 localhost dnsmasq[337322]: exiting on receipt of SIGTERM
Oct 5 06:11:14 localhost podman[337408]: 2025-10-05 10:11:14.497112724 +0000 UTC m=+0.064690255 container kill a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:11:14 localhost systemd[1]: libpod-a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166.scope: Deactivated successfully.
Oct 5 06:11:14 localhost podman[337420]: 2025-10-05 10:11:14.571266184 +0000 UTC m=+0.057542360 container died a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:11:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166-userdata-shm.mount: Deactivated successfully.
Oct 5 06:11:14 localhost podman[337420]: 2025-10-05 10:11:14.598865383 +0000 UTC m=+0.085141499 container cleanup a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 5 06:11:14 localhost systemd[1]: libpod-conmon-a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166.scope: Deactivated successfully.
Oct 5 06:11:14 localhost ovn_controller[157556]: 2025-10-05T10:11:14Z|00198|binding|INFO|Removing iface tapa06a4263-4b ovn-installed in OVS
Oct 5 06:11:14 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:14.648 163201 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 02b1b002-52d6-4cfb-b8ba-3b8c652c137b with type ""#033[00m
Oct 5 06:11:14 localhost ovn_controller[157556]: 2025-10-05T10:11:14Z|00199|binding|INFO|Removing lport a06a4263-4b51-4d79-9505-b27a7155fb25 ovn-installed in OVS
Oct 5 06:11:14 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:14.650 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005471152.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp510ad4b7-e6ed-5555-86c8-64837d639563-379e2661-a9f5-4f45-9005-b831c4749361', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379e2661-a9f5-4f45-9005-b831c4749361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96f9484619534895a7ead96d38119886', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005471152.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7a03c2c-d0f1-432f-afc6-57c4b6bf4162, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a06a4263-4b51-4d79-9505-b27a7155fb25) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:11:14 localhost nova_compute[297130]: 2025-10-05 10:11:14.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:14 localhost podman[337422]: 2025-10-05 10:11:14.654690886 +0000 UTC m=+0.134857997 container remove a724b21f75cb2310b13a7a24ecf9454ae8863212bcd10e24e779575c35e13166 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-379e2661-a9f5-4f45-9005-b831c4749361, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 5 06:11:14 localhost nova_compute[297130]: 2025-10-05 10:11:14.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:14 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:14.656 163201 INFO neutron.agent.ovn.metadata.agent [-] Port a06a4263-4b51-4d79-9505-b27a7155fb25 in datapath 379e2661-a9f5-4f45-9005-b831c4749361 unbound from our chassis#033[00m
Oct 5 06:11:14 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:14.658 163201 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 379e2661-a9f5-4f45-9005-b831c4749361, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 5 06:11:14 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:14.660 271895 DEBUG oslo.privsep.daemon [-] privsep: reply[21066399-1364-41bb-8e53-8a39a3318b7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 5 06:11:14 localhost nova_compute[297130]: 2025-10-05 10:11:14.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:14 localhost kernel: device tapa06a4263-4b left promiscuous mode
Oct 5 06:11:14 localhost nova_compute[297130]: 2025-10-05 10:11:14.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:11:14.725 271653 INFO neutron.agent.dhcp.agent [None req-e3663b6d-b4ab-49a5-bf98-3cd0ef0858af - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:11:14 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:11:14.888 271653 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Oct 5 06:11:14 localhost nova_compute[297130]: 2025-10-05 10:11:14.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e0b72043-3816-415a-800c-4d2edcbb1a5e", "format": "json"}]: dispatch
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:15.021+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e0b72043-3816-415a-800c-4d2edcbb1a5e' of type subvolume
Oct 5 06:11:15 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e0b72043-3816-415a-800c-4d2edcbb1a5e' of type subvolume
Oct 5 06:11:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e0b72043-3816-415a-800c-4d2edcbb1a5e", "force": true, "format": "json"}]: dispatch
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, vol_name:cephfs) < ""
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e0b72043-3816-415a-800c-4d2edcbb1a5e'' moved to trashcan
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e0b72043-3816-415a-800c-4d2edcbb1a5e, vol_name:cephfs) < ""
Oct 5 06:11:15 localhost nova_compute[297130]: 2025-10-05 10:11:15.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:15 localhost systemd[1]: var-lib-containers-storage-overlay-dbdb5d142d8318066f0dd8c55f1cefc1e8efe726e19a4037b90c093a68a63439-merged.mount: Deactivated successfully.
Oct 5 06:11:15 localhost systemd[1]: run-netns-qdhcp\x2d379e2661\x2da9f5\x2d4f45\x2d9005\x2db831c4749361.mount: Deactivated successfully.
Oct 5 06:11:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve47", "tenant_id": "1b54ab1a6a7d4eada4f4b298368a1f5e", "access_level": "rw", "format": "json"}]: dispatch
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, tenant_id:1b54ab1a6a7d4eada4f4b298368a1f5e, vol_name:cephfs) < ""
Oct 5 06:11:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Oct 5 06:11:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Oct 5 06:11:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID eve47 with tenant 1b54ab1a6a7d4eada4f4b298368a1f5e
Oct 5 06:11:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 92 KiB/s wr, 10 op/s
Oct 5 06:11:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:11:15 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, tenant_id:1b54ab1a6a7d4eada4f4b298368a1f5e, vol_name:cephfs) < ""
Oct 5 06:11:16 localhost nova_compute[297130]: 2025-10-05 10:11:16.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:16 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Oct 5 06:11:16 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:16 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:16 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6", "osd", "allow rw pool=manila_data namespace=fsvolumens_d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:11:16 localhost openstack_network_exporter[250246]: ERROR 10:11:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:11:16 localhost openstack_network_exporter[250246]: ERROR 10:11:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:11:16 localhost openstack_network_exporter[250246]: ERROR 10:11:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:11:16 localhost openstack_network_exporter[250246]: ERROR 10:11:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:11:16 localhost openstack_network_exporter[250246]:
Oct 5 06:11:16 localhost openstack_network_exporter[250246]: ERROR 10:11:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:11:16 localhost openstack_network_exporter[250246]:
Oct 5 06:11:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch
Oct 5 06:11:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 5 06:11:16 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 5 06:11:16 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Oct 5 06:11:16 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 5 06:11:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch
Oct 5 06:11:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:16 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72
Oct 5 06:11:16 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...)
Oct 5 06:11:16 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 5 06:11:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 5 06:11:17 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 5 06:11:17 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 5 06:11:17 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 5 06:11:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1b826907-b175-477e-9776-27f573590dbb", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0)
Oct 5 06:11:17 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch
Oct 5 06:11:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0)
Oct 5 06:11:17 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1b826907-b175-477e-9776-27f573590dbb", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, client_metadata.root=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0
Oct 5 06:11:17 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb/3fdb4666-290e-4dfc-850d-ed9553f703c0],prefix=session evict} (starting...)
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 152 KiB/s wr, 17 op/s
Oct 5 06:11:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1b826907-b175-477e-9776-27f573590dbb", "format": "json"}]: dispatch
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1b826907-b175-477e-9776-27f573590dbb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1b826907-b175-477e-9776-27f573590dbb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:17.826+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1b826907-b175-477e-9776-27f573590dbb' of type subvolume
Oct 5 06:11:17 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1b826907-b175-477e-9776-27f573590dbb' of type subvolume
Oct 5 06:11:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1b826907-b175-477e-9776-27f573590dbb", "force": true, "format": "json"}]: dispatch
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < ""
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1b826907-b175-477e-9776-27f573590dbb'' moved to trashcan
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:11:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1b826907-b175-477e-9776-27f573590dbb, vol_name:cephfs) < ""
Oct 5 06:11:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e9f0851d-f1ea-4a9f-a19d-c30687da60d1", "format": "json"}]: dispatch
Oct 5 06:11:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:18.213+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e9f0851d-f1ea-4a9f-a19d-c30687da60d1' of type subvolume
Oct 5 06:11:18 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e9f0851d-f1ea-4a9f-a19d-c30687da60d1' of type subvolume
Oct 5 06:11:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e9f0851d-f1ea-4a9f-a19d-c30687da60d1", "force": true, "format": "json"}]: dispatch
Oct 5 06:11:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, vol_name:cephfs) < ""
Oct 5 06:11:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e9f0851d-f1ea-4a9f-a19d-c30687da60d1'' moved to trashcan
Oct 5 06:11:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:11:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e9f0851d-f1ea-4a9f-a19d-c30687da60d1, vol_name:cephfs) < ""
Oct 5 06:11:18 localhost nova_compute[297130]: 2025-10-05 10:11:18.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:11:18 localhost nova_compute[297130]: 2025-10-05 10:11:18.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:11:18 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch
Oct 5 06:11:18 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch
Oct 5 06:11:18 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch
Oct 5 06:11:18 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished
Oct 5 06:11:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 06:11:18 localhost podman[337451]: 2025-10-05 10:11:18.915799636 +0000 UTC m=+0.083268239 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared',
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent) Oct 5 06:11:18 localhost podman[337451]: 2025-10-05 10:11:18.923165926 +0000 UTC m=+0.090634579 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:11:18 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:11:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:19 localhost nova_compute[297130]: 2025-10-05 10:11:19.269 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve47", "format": "json"}]: dispatch Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:11:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/603546924' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:11:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:11:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/603546924' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:11:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Oct 5 06:11:19 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 5 06:11:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Oct 5 06:11:19 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve47", "format": "json"}]: dispatch Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, 
vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6 Oct 5 06:11:19 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6],prefix=session evict} (starting...) Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 127 KiB/s wr, 14 op/s Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta.tmp' Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta.tmp' to config b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta' Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "format": "json"}]: dispatch Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:19 localhost nova_compute[297130]: 2025-10-05 10:11:19.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:20 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, 
tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:11:20 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:11:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:20 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:11:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:11:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:20 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:20 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/.meta.tmp' Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/.meta.tmp' to config b'/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/.meta' Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:20 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "format": "json"}]: dispatch Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.318 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.318 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.318 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.319 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.319 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:11:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:11:21 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2771894729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.782 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:11:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 127 KiB/s wr, 14 op/s Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.988 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.990 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11506MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", 
"product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.990 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:11:21 localhost nova_compute[297130]: 2025-10-05 10:11:21.991 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.333 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.333 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.351 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:11:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:11:22 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3422835594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.771 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.781 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.799 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.801 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:11:22 localhost nova_compute[297130]: 2025-10-05 10:11:22.802 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:11:22 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "snap_name": "7e49f428-d466-4427-846e-de1163b81c39", "format": "json"}]: dispatch
Oct 5 06:11:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e49f428-d466-4427-846e-de1163b81c39, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7e49f428-d466-4427-846e-de1163b81c39, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "61674ed4-cec8-486b-9874-4251fcb51b62", "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:61674ed4-cec8-486b-9874-4251fcb51b62, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:61674ed4-cec8-486b-9874-4251fcb51b62, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Oct 5 06:11:23 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 5 06:11:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Oct 5 06:11:23 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72
Oct 5 06:11:23 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...)
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve49", "format": "json"}]: dispatch
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 188 KiB/s wr, 23 op/s
Oct 5 06:11:23 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Oct 5 06:11:23 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 5 06:11:23 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Oct 5 06:11:23 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.804 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.804 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.805 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.805 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 5 06:11:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Oct 5 06:11:23 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Oct 5 06:11:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Oct 5 06:11:23 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.846 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.846 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:11:23 localhost nova_compute[297130]: 2025-10-05 10:11:23.847 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "auth_id": "eve49", "format": "json"}]: dispatch
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < ""
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6
Oct 5 06:11:23 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e/3c69abb6-63db-41c3-acd6-fc4e060ecbb6],prefix=session evict} (starting...)
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 5 06:11:23 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:11:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "format": "json"}]: dispatch
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:24.129+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd0ab7324-adf1-419e-90e0-250fc5ef9c2e' of type subvolume
Oct 5 06:11:24 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd0ab7324-adf1-419e-90e0-250fc5ef9c2e' of type subvolume
Oct 5 06:11:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d0ab7324-adf1-419e-90e0-250fc5ef9c2e", "force": true, "format": "json"}]: dispatch
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d0ab7324-adf1-419e-90e0-250fc5ef9c2e'' moved to trashcan
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d0ab7324-adf1-419e-90e0-250fc5ef9c2e, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0)
Oct 5 06:11:24 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch
Oct 5 06:11:24 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c
Oct 5 06:11:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:11:24 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < ""
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:24 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362", "osd", "allow rw pool=manila_data namespace=fsvolumens_75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:11:24 localhost nova_compute[297130]: 2025-10-05 10:11:24.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 121 KiB/s wr, 15 op/s
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.836219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085836295, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2027, "num_deletes": 268, "total_data_size": 2651309, "memory_usage": 2699424, "flush_reason": "Manual Compaction"}
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct 5 06:11:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e233 e233: 6 total, 6 up, 6 in
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085847498, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1732544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26550, "largest_seqno": 28572, "table_properties": {"data_size": 1724102, "index_size": 4951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 22012, "raw_average_key_size": 22, "raw_value_size": 1705770, "raw_average_value_size": 1767, "num_data_blocks": 208, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659006, "oldest_key_time": 1759659006, "file_creation_time": 1759659085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 11324 microseconds, and 5334 cpu microseconds.
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.847550) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1732544 bytes OK
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.847573) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.849832) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.849858) EVENT_LOG_v1 {"time_micros": 1759659085849852, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.849881) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2641328, prev total WAL file size 2641369, number of live WAL files 2.
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.850682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1691KB)], [42(15MB)]
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085850740, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 18489332, "oldest_snapshot_seqno": -1}
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 13556 keys, 17192994 bytes, temperature: kUnknown
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659085999052, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 17192994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17115232, "index_size": 42781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33925, "raw_key_size": 365278, "raw_average_key_size": 26, "raw_value_size": 16884165, "raw_average_value_size": 1245, "num_data_blocks": 1587, "num_entries": 13556, "num_filter_entries": 13556, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759659085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct 5 06:11:25 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.999405) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 17192994 bytes
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.001112) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.6 rd, 115.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 16.0 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(20.6) write-amplify(9.9) OK, records in: 14109, records dropped: 553 output_compression: NoCompression
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.001142) EVENT_LOG_v1 {"time_micros": 1759659086001129, "job": 24, "event": "compaction_finished", "compaction_time_micros": 148416, "compaction_time_cpu_micros": 51405, "output_level": 6, "num_output_files": 1, "total_output_size": 17192994, "num_input_records": 14109, "num_output_records": 13556, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659086001481, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659086003792, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:25.850568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.003883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.003889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.003892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.003895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:11:26 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:11:26.003898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 5 06:11:26 localhost podman[248157]: time="2025-10-05T10:11:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:11:26 localhost podman[248157]: @ - - [05/Oct/2025:10:11:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1"
Oct 5 06:11:26 localhost podman[248157]: @ - - [05/Oct/2025:10:11:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19364 "" "Go-http-client/1.1"
Oct 5 06:11:26 localhost nova_compute[297130]: 2025-10-05 10:11:26.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:11:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch
Oct 5 06:11:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:11:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Oct 5 06:11:26 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Oct 5 06:11:26 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d
Oct 5 06:11:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:11:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:11:26 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Oct 5 06:11:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:11:26 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:11:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:11:26 localhost systemd[1]: tmp-crun.T95tmO.mount: Deactivated successfully.
Oct 5 06:11:26 localhost podman[337518]: 2025-10-05 10:11:26.938293736 +0000 UTC m=+0.095860671 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 5 06:11:26 localhost systemd[1]: tmp-crun.4nFCTQ.mount: Deactivated successfully.
Oct 5 06:11:26 localhost podman[337517]: 2025-10-05 10:11:26.98457486 +0000 UTC m=+0.146938184 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Oct 5 06:11:27 localhost podman[337518]: 2025-10-05 10:11:27.001218522 +0000 UTC m=+0.158785427 container exec_died 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:11:27 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:11:27 localhost podman[337517]: 2025-10-05 10:11:27.024193995 +0000 UTC m=+0.186557309 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:11:27 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "snap_name": "7e49f428-d466-4427-846e-de1163b81c39_7630f5f1-60cc-4e6c-aba0-a5eb5f08ea66", "force": true, "format": "json"}]: dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e49f428-d466-4427-846e-de1163b81c39_7630f5f1-60cc-4e6c-aba0-a5eb5f08ea66, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta.tmp' Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta.tmp' to config b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta' Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e49f428-d466-4427-846e-de1163b81c39_7630f5f1-60cc-4e6c-aba0-a5eb5f08ea66, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:27 localhost nova_compute[297130]: 2025-10-05 10:11:27.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", 
"sub_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "snap_name": "7e49f428-d466-4427-846e-de1163b81c39", "force": true, "format": "json"}]: dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e49f428-d466-4427-846e-de1163b81c39, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta.tmp' Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta.tmp' to config b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72/.meta' Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7e49f428-d466-4427-846e-de1163b81c39, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:27 localhost ceph-mon[316511]: 
log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:27 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, client_metadata.root=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362 Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 154 KiB/s wr, 38 op/s Oct 5 06:11:27 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw 
asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e/8a5c9047-d7d5-4b4d-a9b3-763190279362],prefix=session evict} (starting...) Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:27 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "format": "json"}]: dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e234 e234: 6 total, 6 up, 6 in Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:27.924+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75c3b8eb-36c6-4611-a14b-a89baffc9f0e' of type subvolume Oct 5 06:11:27 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75c3b8eb-36c6-4611-a14b-a89baffc9f0e' of type 
subvolume Oct 5 06:11:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75c3b8eb-36c6-4611-a14b-a89baffc9f0e", "force": true, "format": "json"}]: dispatch Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/75c3b8eb-36c6-4611-a14b-a89baffc9f0e'' moved to trashcan Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75c3b8eb-36c6-4611-a14b-a89baffc9f0e, vol_name:cephfs) < "" Oct 5 06:11:28 localhost nova_compute[297130]: 2025-10-05 10:11:28.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:11:28 localhost nova_compute[297130]: 2025-10-05 10:11:28.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:11:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e235 e235: 6 total, 6 up, 6 in Oct 5 06:11:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:11:29 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1722453491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:11:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:11:29 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1722453491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:11:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 135 KiB/s wr, 47 op/s Oct 5 06:11:29 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "61674ed4-cec8-486b-9874-4251fcb51b62", "snap_name": "6dbe2134-c49e-4ec1-92d5-9f7fecf77c5f", "force": true, "format": "json"}]: dispatch Oct 5 06:11:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:61674ed4-cec8-486b-9874-4251fcb51b62, prefix:fs subvolumegroup snapshot rm, snap_name:6dbe2134-c49e-4ec1-92d5-9f7fecf77c5f, vol_name:cephfs) < "" Oct 5 06:11:29 localhost ceph-mgr[301363]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:61674ed4-cec8-486b-9874-4251fcb51b62, prefix:fs subvolumegroup snapshot rm, snap_name:6dbe2134-c49e-4ec1-92d5-9f7fecf77c5f, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8574b65-5dbc-4037-af0e-5a3e47ec1613", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, vol_name:cephfs) < "" Oct 5 06:11:30 localhost nova_compute[297130]: 2025-10-05 10:11:30.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8574b65-5dbc-4037-af0e-5a3e47ec1613/.meta.tmp' Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8574b65-5dbc-4037-af0e-5a3e47ec1613/.meta.tmp' to config b'/volumes/_nogroup/b8574b65-5dbc-4037-af0e-5a3e47ec1613/.meta' Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": 
"b8574b65-5dbc-4037-af0e-5a3e47ec1613", "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:11:30 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:11:30 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume 
deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:11:30 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:30.556+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bb6ff81-c80d-4db3-8234-64a66d77cd72' of type subvolume Oct 5 06:11:30 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bb6ff81-c80d-4db3-8234-64a66d77cd72' of type subvolume Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bb6ff81-c80d-4db3-8234-64a66d77cd72", "force": true, "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8bb6ff81-c80d-4db3-8234-64a66d77cd72'' moved to trashcan Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bb6ff81-c80d-4db3-8234-64a66d77cd72, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:30 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:30 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c Oct 5 06:11:30 localhost ceph-mon[316511]: 
mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:30 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:11:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:11:30 localhost systemd[1]: tmp-crun.gw4KbA.mount: Deactivated successfully. 
Oct 5 06:11:30 localhost podman[337565]: 2025-10-05 10:11:30.906961527 +0000 UTC m=+0.069244698 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:11:30 localhost podman[337564]: 2025-10-05 10:11:30.95722057 +0000 UTC m=+0.119128191 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3) Oct 5 06:11:30 localhost podman[337564]: 2025-10-05 10:11:30.966225065 +0000 UTC m=+0.128132726 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:11:30 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:30 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:30 localhost podman[337565]: 2025-10-05 10:11:30.98526743 +0000 UTC m=+0.147550571 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Oct 5 06:11:30 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:11:31 localhost nova_compute[297130]: 2025-10-05 10:11:31.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 135 KiB/s wr, 47 op/s Oct 5 06:11:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "61674ed4-cec8-486b-9874-4251fcb51b62", "force": true, "format": "json"}]: dispatch Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:61674ed4-cec8-486b-9874-4251fcb51b62, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:61674ed4-cec8-486b-9874-4251fcb51b62, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d85f292-2785-4813-b1fe-75bba74574f1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d85f292-2785-4813-b1fe-75bba74574f1, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d85f292-2785-4813-b1fe-75bba74574f1/.meta.tmp' Oct 5 06:11:33 localhost ceph-mgr[301363]: 
[volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d85f292-2785-4813-b1fe-75bba74574f1/.meta.tmp' to config b'/volumes/_nogroup/8d85f292-2785-4813-b1fe-75bba74574f1/.meta' Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d85f292-2785-4813-b1fe-75bba74574f1, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d85f292-2785-4813-b1fe-75bba74574f1", "format": "json"}]: dispatch Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d85f292-2785-4813-b1fe-75bba74574f1, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d85f292-2785-4813-b1fe-75bba74574f1, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": 
"client.alice_bob", "format": "json"} v 0) Oct 5 06:11:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:33 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:11:33 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:33 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 187 KiB/s wr, 67 op/s Oct 5 06:11:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": 
"fs subvolume create", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/.meta.tmp' Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/.meta.tmp' to config b'/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/.meta' Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "format": "json"}]: dispatch Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:34 
localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:34 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:34 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4 Oct 5 06:11:34 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4],prefix=session evict} (starting...) Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:35 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:35 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:35 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:35 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:35 localhost nova_compute[297130]: 2025-10-05 10:11:35.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:35.225 163201 DEBUG ovsdbapp.backend.ovs_idl.event 
[-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:11:35 localhost nova_compute[297130]: 2025-10-05 10:11:35.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:35 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:35.226 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:11:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 85 KiB/s wr, 31 op/s Oct 5 06:11:36 localhost nova_compute[297130]: 2025-10-05 10:11:36.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:11:36 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:11:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:11:36 localhost 
ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:11:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:11:36 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 92cd45e8-8109-4416-8a29-fb2b4643d93a (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:11:36 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 92cd45e8-8109-4416-8a29-fb2b4643d93a (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:11:36 localhost ceph-mgr[301363]: [progress INFO root] Completed event 92cd45e8-8109-4416-8a29-fb2b4643d93a (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:11:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:11:36 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:11:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 e236: 6 total, 6 up, 6 in Oct 5 06:11:36 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:11:36 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:11:37 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:11:37 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:11:37 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 
06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/.meta.tmp' Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/.meta.tmp' to config b'/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/.meta' Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:11:37 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:11:37 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:11:37 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/.meta.tmp' Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/.meta.tmp' to config b'/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/.meta' Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 162 KiB/s wr, 38 op/s Oct 5 06:11:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d85f292-2785-4813-b1fe-75bba74574f1", "format": "json"}]: dispatch Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8d85f292-2785-4813-b1fe-75bba74574f1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d85f292-2785-4813-b1fe-75bba74574f1, format:json, prefix:fs clone status, vol_name:cephfs) < 
"" Oct 5 06:11:37 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d85f292-2785-4813-b1fe-75bba74574f1' of type subvolume Oct 5 06:11:37 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:37.990+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d85f292-2785-4813-b1fe-75bba74574f1' of type subvolume Oct 5 06:11:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d85f292-2785-4813-b1fe-75bba74574f1", "force": true, "format": "json"}]: dispatch Oct 5 06:11:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d85f292-2785-4813-b1fe-75bba74574f1, vol_name:cephfs) < "" Oct 5 06:11:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d85f292-2785-4813-b1fe-75bba74574f1'' moved to trashcan Oct 5 06:11:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d85f292-2785-4813-b1fe-75bba74574f1, vol_name:cephfs) < "" Oct 5 06:11:38 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:11:38 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:38 localhost ceph-mon[316511]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:11:38 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:11:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:38 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:38 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c Oct 5 06:11:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:38 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:39 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:39 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:39 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", 
"allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:39 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 144 KiB/s wr, 33 op/s Oct 5 06:11:40 localhost nova_compute[297130]: 2025-10-05 10:11:40.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "auth_id": "Joe", "tenant_id": "aa3c1cdcd58e40eab93d64ef314bf089", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, tenant_id:aa3c1cdcd58e40eab93d64ef314bf089, vol_name:cephfs) < "" Oct 5 06:11:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Oct 5 06:11:40 localhost ceph-mon[316511]: log_channel(audit) log [INF] 
: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:11:40 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID Joe with tenant aa3c1cdcd58e40eab93d64ef314bf089 Oct 5 06:11:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:40 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, tenant_id:aa3c1cdcd58e40eab93d64ef314bf089, vol_name:cephfs) < "" Oct 5 06:11:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:11:40 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:40 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:11:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:40 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, 
vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba", "osd", "allow rw pool=manila_data namespace=fsvolumens_6c3ed1b3-843c-400f-bb43-1c34240712dc", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "auth_id": "tempest-cephx-id-425502484", "tenant_id": "b29ecc0411824b33a3bc8ad3d9673df3", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-425502484, format:json, prefix:fs subvolume authorize, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, tenant_id:b29ecc0411824b33a3bc8ad3d9673df3, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} v 0) Oct 5 
06:11:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-425502484 with tenant b29ecc0411824b33a3bc8ad3d9673df3 Oct 5 06:11:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:41 localhost nova_compute[297130]: 2025-10-05 10:11:41.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-425502484, format:json, prefix:fs subvolume authorize, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, tenant_id:b29ecc0411824b33a3bc8ad3d9673df3, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 
-' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8950e1c8-7d94-48c5-9d72-52b5999f38bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8950e1c8-7d94-48c5-9d72-52b5999f38bd/.meta.tmp' Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8950e1c8-7d94-48c5-9d72-52b5999f38bd/.meta.tmp' to config b'/volumes/_nogroup/8950e1c8-7d94-48c5-9d72-52b5999f38bd/.meta' Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8950e1c8-7d94-48c5-9d72-52b5999f38bd", "format": "json"}]: dispatch Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 144 KiB/s wr, 33 op/s Oct 5 06:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:11:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:11:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:41 localhost systemd[1]: tmp-crun.GSmni7.mount: Deactivated successfully. 
Oct 5 06:11:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4 Oct 5 06:11:41 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4],prefix=session evict} (starting...) 
Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:41 localhost podman[337693]: 2025-10-05 10:11:41.94729947 +0000 UTC m=+0.102771787 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251001, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Oct 5 06:11:41 localhost podman[337693]: 2025-10-05 10:11:41.987276755 +0000 UTC m=+0.142749072 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute) Oct 5 06:11:42 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:11:42 localhost podman[337694]: 2025-10-05 10:11:42.034034972 +0000 UTC m=+0.188116192 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:11:42 localhost podman[337694]: 2025-10-05 10:11:42.073175083 +0000 UTC m=+0.227256333 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:11:42 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-425502484", "caps": ["mds", "allow rw path=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_2225aa83-67d3-4889-ba2c-ca0643f93e0b", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "auth_id": "tempest-cephx-id-425502484", "format": "json"}]: dispatch Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-425502484, format:json, prefix:fs subvolume deauthorize, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:42 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} v 0) Oct 5 06:11:42 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} : dispatch Oct 5 06:11:42 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 
handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} v 0) Oct 5 06:11:42 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} : dispatch Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-425502484, format:json, prefix:fs subvolume deauthorize, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "auth_id": "tempest-cephx-id-425502484", "format": "json"}]: dispatch Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-425502484, format:json, prefix:fs subvolume evict, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-425502484, client_metadata.root=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11 Oct 5 06:11:42 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-425502484,client_metadata.root=/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b/599e55be-3e27-4617-b6b4-fe0731ea2d11],prefix=session evict} (starting...) 
Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-425502484, format:json, prefix:fs subvolume evict, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:42 localhost sshd[337739]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:11:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "format": "json"}]: dispatch Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:42 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:42.620+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2225aa83-67d3-4889-ba2c-ca0643f93e0b' of type subvolume Oct 5 06:11:42 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2225aa83-67d3-4889-ba2c-ca0643f93e0b' of type subvolume Oct 5 06:11:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2225aa83-67d3-4889-ba2c-ca0643f93e0b", "force": true, "format": "json"}]: dispatch Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2225aa83-67d3-4889-ba2c-ca0643f93e0b'' moved to trashcan Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2225aa83-67d3-4889-ba2c-ca0643f93e0b, vol_name:cephfs) < "" Oct 5 06:11:43 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-425502484", "format": "json"} : dispatch Oct 5 06:11:43 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} : dispatch Oct 5 06:11:43 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"} : dispatch Oct 5 06:11:43 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-425502484"}]': finished Oct 5 06:11:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 186 KiB/s wr, 21 op/s Oct 5 06:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:11:43 localhost podman[337742]: 2025-10-05 10:11:43.921456215 +0000 UTC m=+0.089520438 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 06:11:43 localhost podman[337742]: 2025-10-05 10:11:43.938859006 +0000 UTC m=+0.106923199 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, version=9.6, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7) Oct 5 06:11:43 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:11:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:11:44 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:11:44 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:44 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:44 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:44 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:11:44 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/.meta.tmp' Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/.meta.tmp' to config b'/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/.meta' Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "format": "json"}]: dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes 
INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:44 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:44 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c Oct 5 06:11:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:44 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:45 localhost nova_compute[297130]: 2025-10-05 10:11:45.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:45 localhost ovn_metadata_agent[163196]: 2025-10-05 10:11:45.229 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:11:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:11:45 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' 
cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:45 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8950e1c8-7d94-48c5-9d72-52b5999f38bd", "format": "json"}]: dispatch Oct 5 06:11:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:45 localhost ceph-mgr[301363]: [volumes INFO 
volumes.module] Finishing _cmd_fs_clone_status(clone_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:45 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:45.271+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8950e1c8-7d94-48c5-9d72-52b5999f38bd' of type subvolume Oct 5 06:11:45 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8950e1c8-7d94-48c5-9d72-52b5999f38bd' of type subvolume Oct 5 06:11:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8950e1c8-7d94-48c5-9d72-52b5999f38bd", "force": true, "format": "json"}]: dispatch Oct 5 06:11:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, vol_name:cephfs) < "" Oct 5 06:11:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8950e1c8-7d94-48c5-9d72-52b5999f38bd'' moved to trashcan Oct 5 06:11:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8950e1c8-7d94-48c5-9d72-52b5999f38bd, vol_name:cephfs) < "" Oct 5 06:11:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 186 KiB/s wr, 21 op/s Oct 5 06:11:46 localhost nova_compute[297130]: 2025-10-05 10:11:46.290 2 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:46 localhost openstack_network_exporter[250246]: ERROR 10:11:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:11:46 localhost openstack_network_exporter[250246]: ERROR 10:11:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:11:46 localhost openstack_network_exporter[250246]: ERROR 10:11:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:11:46 localhost openstack_network_exporter[250246]: ERROR 10:11:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:11:46 localhost openstack_network_exporter[250246]: Oct 5 06:11:46 localhost openstack_network_exporter[250246]: ERROR 10:11:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:11:46 localhost openstack_network_exporter[250246]: Oct 5 06:11:46 localhost ovn_controller[157556]: 2025-10-05T10:11:46Z|00200|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory Oct 5 06:11:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:11:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command 
mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:11:47 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:11:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 224 KiB/s wr, 26 op/s Oct 5 06:11:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:47 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "auth_id": "Joe", "tenant_id": "30d95dfaad874b1a95bc3775adb2dbc3", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, tenant_id:30d95dfaad874b1a95bc3775adb2dbc3, vol_name:cephfs) < "" Oct 5 06:11:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Oct 5 06:11:47 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:11:47 localhost ceph-mgr[301363]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Oct 5 06:11:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, tenant_id:30d95dfaad874b1a95bc3775adb2dbc3, vol_name:cephfs) < "" Oct 5 06:11:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:47.986+0000 7f417fc90640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Oct 5 06:11:47 localhost ceph-mgr[301363]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Oct 5 06:11:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": 
"0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:48 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:48 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:48 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": 
"auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:48 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, 
client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4 Oct 5 06:11:48 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4],prefix=session evict} (starting...) Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7978baff-4c2d-4524-8c4a-2c2eaa5f60e7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7978baff-4c2d-4524-8c4a-2c2eaa5f60e7/.meta.tmp' Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7978baff-4c2d-4524-8c4a-2c2eaa5f60e7/.meta.tmp' to config b'/volumes/_nogroup/7978baff-4c2d-4524-8c4a-2c2eaa5f60e7/.meta' Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:1073741824, sub_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7978baff-4c2d-4524-8c4a-2c2eaa5f60e7", "format": "json"}]: dispatch Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, vol_name:cephfs) < "" Oct 5 06:11:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, vol_name:cephfs) < "" Oct 5 06:11:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:49 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:49 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:49 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:49 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:11:49 localhost sshd[337776]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:11:49 localhost podman[337765]: 2025-10-05 10:11:49.544864699 +0000 UTC m=+0.089954250 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 
5 06:11:49 localhost podman[337765]: 2025-10-05 10:11:49.579280943 +0000 UTC m=+0.124370484 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Oct 5 06:11:49 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: 
Deactivated successfully. Oct 5 06:11:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 140 KiB/s wr, 16 op/s Oct 5 06:11:50 localhost nova_compute[297130]: 2025-10-05 10:11:50.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:50 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "auth_id": "tempest-cephx-id-1938692308", "tenant_id": "30d95dfaad874b1a95bc3775adb2dbc3", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1938692308, format:json, prefix:fs subvolume authorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, tenant_id:30d95dfaad874b1a95bc3775adb2dbc3, vol_name:cephfs) < "" Oct 5 06:11:50 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} v 0) Oct 5 06:11:50 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} : dispatch Oct 5 06:11:50 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1938692308 with tenant 30d95dfaad874b1a95bc3775adb2dbc3 Oct 5 06:11:50 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:50 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1938692308, format:json, prefix:fs subvolume authorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, tenant_id:30d95dfaad874b1a95bc3775adb2dbc3, vol_name:cephfs) < "" Oct 5 06:11:51 localhost nova_compute[297130]: 2025-10-05 10:11:51.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", 
"format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1938692308", "caps": ["mds", "allow rw path=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391", "osd", "allow rw pool=manila_data namespace=fsvolumens_d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:11:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:11:51 localhost 
ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:11:51 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "tenant_id": "3577c88a31454fdc9b3c8a7641247a9c", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID tempest-cephx-id-1758269602 with tenant 3577c88a31454fdc9b3c8a7641247a9c Oct 5 06:11:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume authorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, tenant_id:3577c88a31454fdc9b3c8a7641247a9c, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 140 KiB/s wr, 16 op/s Oct 5 06:11:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7978baff-4c2d-4524-8c4a-2c2eaa5f60e7", "format": "json"}]: dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:51 localhost 
ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:51.960+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7978baff-4c2d-4524-8c4a-2c2eaa5f60e7' of type subvolume Oct 5 06:11:51 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7978baff-4c2d-4524-8c4a-2c2eaa5f60e7' of type subvolume Oct 5 06:11:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7978baff-4c2d-4524-8c4a-2c2eaa5f60e7", "force": true, "format": "json"}]: dispatch Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, vol_name:cephfs) < "" Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7978baff-4c2d-4524-8c4a-2c2eaa5f60e7'' moved to trashcan Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7978baff-4c2d-4524-8c4a-2c2eaa5f60e7, vol_name:cephfs) < "" Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:52 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1758269602", "caps": ["mds", "allow rw path=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4", "osd", "allow rw pool=manila_data namespace=fsvolumens_0fda86af-a18e-4ae6-b70c-d418b3495977", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 
222 KiB/s wr, 40 op/s Oct 5 06:11:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:11:54 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:54 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:11:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:11:54 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:11:54 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4129631628' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:11:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:11:54 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4129631628' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:11:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "auth_id": "Joe", "format": "json"}]: dispatch Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes WARNING 
volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'd3049662-1f6a-47c4-9ebc-be7eeb68ea15' Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "auth_id": "Joe", "format": "json"}]: dispatch Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391 Oct 5 06:11:54 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391],prefix=session evict} (starting...) 
Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} v 0) Oct 5 06:11:55 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} v 0) Oct 5 06:11:55 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume deauthorize, 
sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "auth_id": "tempest-cephx-id-1758269602", "format": "json"}]: dispatch Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1758269602, client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4 Oct 5 06:11:55 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1758269602,client_metadata.root=/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977/a3db8268-e4d3-4d4b-a1be-c0222f7196b4],prefix=session evict} (starting...) 
Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1758269602, format:json, prefix:fs subvolume evict, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:55 localhost nova_compute[297130]: 2025-10-05 10:11:55.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7cbb8776-0b38-4328-9be6-36f3883bb770", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7cbb8776-0b38-4328-9be6-36f3883bb770, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7cbb8776-0b38-4328-9be6-36f3883bb770/.meta.tmp' Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cbb8776-0b38-4328-9be6-36f3883bb770/.meta.tmp' to config b'/volumes/_nogroup/7cbb8776-0b38-4328-9be6-36f3883bb770/.meta' Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7cbb8776-0b38-4328-9be6-36f3883bb770, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7cbb8776-0b38-4328-9be6-36f3883bb770", "format": "json"}]: dispatch Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cbb8776-0b38-4328-9be6-36f3883bb770, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cbb8776-0b38-4328-9be6-36f3883bb770, vol_name:cephfs) < "" Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1758269602", "format": "json"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"} : dispatch Oct 5 06:11:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1758269602"}]': finished Oct 5 06:11:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 130 KiB/s wr, 30 op/s Oct 5 06:11:56 localhost podman[248157]: time="2025-10-05T10:11:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:11:56 localhost podman[248157]: @ - - [05/Oct/2025:10:11:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:11:56 localhost podman[248157]: @ - - [05/Oct/2025:10:11:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19364 "" "Go-http-client/1.1" Oct 5 06:11:56 localhost nova_compute[297130]: 2025-10-05 10:11:56.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:11:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "auth_id": "tempest-cephx-id-1938692308", "format": "json"}]: dispatch Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1938692308, format:json, prefix:fs subvolume deauthorize, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 167 KiB/s wr, 48 op/s Oct 5 06:11:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:11:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:11:57 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} v 0) Oct 5 06:11:57 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} : dispatch Oct 5 06:11:57 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} v 0) Oct 5 06:11:57 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} : dispatch Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1938692308, format:json, prefix:fs subvolume deauthorize, 
sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "auth_id": "tempest-cephx-id-1938692308", "format": "json"}]: dispatch Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1938692308, format:json, prefix:fs subvolume evict, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1938692308, client_metadata.root=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391 Oct 5 06:11:57 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=tempest-cephx-id-1938692308,client_metadata.root=/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15/645548e5-c873-4781-9a72-d3c7a98ca391],prefix=session evict} (starting...) 
Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1938692308, format:json, prefix:fs subvolume evict, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < "" Oct 5 06:11:57 localhost podman[337791]: 2025-10-05 10:11:57.930438855 +0000 UTC m=+0.091143073 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:11:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:11:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:57 localhost systemd[1]: tmp-crun.QwujeQ.mount: Deactivated successfully. Oct 5 06:11:57 localhost podman[337790]: 2025-10-05 10:11:57.996737673 +0000 UTC m=+0.159634819 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:11:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:11:58 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:11:58 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:58 localhost podman[337790]: 2025-10-05 10:11:58.011223295 +0000 UTC m=+0.174120491 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, org.label-schema.vendor=CentOS) Oct 5 06:11:58 localhost podman[337791]: 2025-10-05 10:11:58.019103658 +0000 UTC m=+0.179807906 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', 
'--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:11:58 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:11:58 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:11:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:58 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:11:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:11:58 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict 
{filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:11:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:11:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1938692308", "format": "json"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1938692308"}]': finished Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:11:58 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": 
"client.alice"}]': finished Oct 5 06:11:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:11:59 localhost sshd[337832]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:11:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "format": "json"}]: dispatch Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0fda86af-a18e-4ae6-b70c-d418b3495977, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0fda86af-a18e-4ae6-b70c-d418b3495977, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:59 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:59.531+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0fda86af-a18e-4ae6-b70c-d418b3495977' of type subvolume Oct 5 06:11:59 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0fda86af-a18e-4ae6-b70c-d418b3495977' of type subvolume Oct 5 06:11:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0fda86af-a18e-4ae6-b70c-d418b3495977", "force": true, "format": "json"}]: dispatch Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:59 
localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0fda86af-a18e-4ae6-b70c-d418b3495977'' moved to trashcan Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0fda86af-a18e-4ae6-b70c-d418b3495977, vol_name:cephfs) < "" Oct 5 06:11:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 118 KiB/s wr, 42 op/s Oct 5 06:11:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cbb8776-0b38-4328-9be6-36f3883bb770", "format": "json"}]: dispatch Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cbb8776-0b38-4328-9be6-36f3883bb770, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cbb8776-0b38-4328-9be6-36f3883bb770, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:11:59 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:11:59.950+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7cbb8776-0b38-4328-9be6-36f3883bb770' of type subvolume Oct 5 06:11:59 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7cbb8776-0b38-4328-9be6-36f3883bb770' of type subvolume Oct 5 06:11:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7cbb8776-0b38-4328-9be6-36f3883bb770", "force": true, "format": "json"}]: dispatch Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cbb8776-0b38-4328-9be6-36f3883bb770, vol_name:cephfs) < "" Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7cbb8776-0b38-4328-9be6-36f3883bb770'' moved to trashcan Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:11:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cbb8776-0b38-4328-9be6-36f3883bb770, vol_name:cephfs) < "" Oct 5 06:12:00 localhost nova_compute[297130]: 2025-10-05 10:12:00.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:12:01 
localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:01 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "auth_id": "Joe", "format": "json"}]: dispatch Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, 
format:json, prefix:fs subvolume deauthorize, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:12:01 localhost nova_compute[297130]: 2025-10-05 10:12:01.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Oct 5 06:12:01 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Oct 5 06:12:01 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:12:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "auth_id": "Joe", "format": "json"}]: dispatch Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, 
client_metadata.root=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba Oct 5 06:12:01 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc/37367bdc-4d77-4378-8d6b-e5270c223aba],prefix=session evict} (starting...) Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < "" Oct 5 06:12:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 118 KiB/s wr, 42 op/s Oct 5 06:12:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:12:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", 
"entity": "client.Joe"} : dispatch Oct 5 06:12:01 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Oct 5 06:12:01 localhost systemd[1]: tmp-crun.CH73as.mount: Deactivated successfully. Oct 5 06:12:01 localhost podman[337837]: 2025-10-05 10:12:01.904444239 +0000 UTC m=+0.072111405 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0) Oct 5 06:12:01 localhost podman[337836]: 2025-10-05 10:12:01.920564547 +0000 UTC m=+0.087615846 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.license=GPLv2) Oct 5 06:12:01 localhost podman[337836]: 2025-10-05 10:12:01.952744 +0000 UTC m=+0.119795249 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:12:01 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:12:01 localhost podman[337837]: 2025-10-05 10:12:01.97415384 +0000 UTC m=+0.141821026 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 06:12:02 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:12:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 188 KiB/s wr, 50 op/s Oct 5 06:12:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "auth_id": "admin", "tenant_id": "aa3c1cdcd58e40eab93d64ef314bf089", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, tenant_id:aa3c1cdcd58e40eab93d64ef314bf089, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Oct 5 06:12:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. 
Not allowed to modify Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, tenant_id:aa3c1cdcd58e40eab93d64ef314bf089, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:04.386+0000 7f417fc90640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Oct 5 06:12:04 localhost ceph-mgr[301363]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Oct 5 06:12:04 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "430d42c2-2f3c-4249-b59d-5cc916e70e33", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/430d42c2-2f3c-4249-b59d-5cc916e70e33/.meta.tmp' Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/430d42c2-2f3c-4249-b59d-5cc916e70e33/.meta.tmp' to config 
b'/volumes/_nogroup/430d42c2-2f3c-4249-b59d-5cc916e70e33/.meta' Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "430d42c2-2f3c-4249-b59d-5cc916e70e33", "format": "json"}]: dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:12:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:04 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:12:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:04 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:05 localhost nova_compute[297130]: 2025-10-05 10:12:05.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:05 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:05 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:05 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:05 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 106 KiB/s wr, 26 op/s Oct 5 06:12:06 localhost nova_compute[297130]: 2025-10-05 10:12:06.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 134 KiB/s wr, 44 op/s Oct 5 06:12:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "auth_id": "david", "tenant_id": "aa3c1cdcd58e40eab93d64ef314bf089", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, tenant_id:aa3c1cdcd58e40eab93d64ef314bf089, vol_name:cephfs) < "" Oct 5 06:12:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Oct 5 06:12:07 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:07 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID david with tenant aa3c1cdcd58e40eab93d64ef314bf089 Oct 5 06:12:07 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:07 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:07 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, tenant_id:aa3c1cdcd58e40eab93d64ef314bf089, vol_name:cephfs) < "" Oct 5 06:12:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:08 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:08 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e", "osd", "allow rw pool=manila_data namespace=fsvolumens_53de1133-5804-40f0-a972-8bad9c13f1cc", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 
full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "430d42c2-2f3c-4249-b59d-5cc916e70e33", "format": "json"}]: dispatch Oct 5 06:12:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:09 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:09.390+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '430d42c2-2f3c-4249-b59d-5cc916e70e33' of type subvolume Oct 5 06:12:09 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '430d42c2-2f3c-4249-b59d-5cc916e70e33' of type subvolume Oct 5 06:12:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "430d42c2-2f3c-4249-b59d-5cc916e70e33", "force": true, "format": "json"}]: dispatch Oct 5 06:12:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, vol_name:cephfs) < "" Oct 5 06:12:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/430d42c2-2f3c-4249-b59d-5cc916e70e33'' moved to trashcan Oct 5 06:12:09 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:12:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:430d42c2-2f3c-4249-b59d-5cc916e70e33, vol_name:cephfs) < "" Oct 5 06:12:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 98 KiB/s wr, 25 op/s Oct 5 06:12:10 localhost nova_compute[297130]: 2025-10-05 10:12:10.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e5a56fcc-7ee2-4018-8334-bfe3749856da/.meta.tmp' Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e5a56fcc-7ee2-4018-8334-bfe3749856da/.meta.tmp' to config b'/volumes/_nogroup/e5a56fcc-7ee2-4018-8334-bfe3749856da/.meta' Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, 
vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "format": "json"}]: dispatch Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < "" Oct 5 06:12:11 localhost nova_compute[297130]: 2025-10-05 10:12:11.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:12:11 Oct 5 06:12:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:12:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:12:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'manila_data', 'images', 'manila_metadata', 'backups'] Oct 5 06:12:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:12:11 
localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:11 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict 
{filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:12:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:12:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 98 KiB/s wr, 25 op/s Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:12:11 localhost ceph-mgr[301363]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-06 of space, bias 1.0, pg target 0.0005425347222222222 quantized to 32 (current 32) Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:12:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0007881748150474595 of space, bias 4.0, pg target 0.6273871527777778 quantized to 16 (current 16) Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:12:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:12:12 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:12 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth 
rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:12 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:12 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:12:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:12:12 localhost podman[337885]: 2025-10-05 10:12:12.906661439 +0000 UTC m=+0.078639753 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:12:12 localhost podman[337885]: 2025-10-05 10:12:12.917076412 +0000 UTC m=+0.089054716 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true) Oct 5 06:12:12 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:12:13 localhost podman[337886]: 2025-10-05 10:12:13.010999388 +0000 UTC m=+0.179302183 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:12:13 localhost podman[337886]: 2025-10-05 10:12:13.021097722 +0000 UTC m=+0.189400507 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:12:13 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:12:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 153 KiB/s wr, 31 op/s Oct 5 06:12:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8574b65-5dbc-4037-af0e-5a3e47ec1613", "format": "json"}]: dispatch Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-mgr[301363]: 
mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8574b65-5dbc-4037-af0e-5a3e47ec1613' of type subvolume Oct 5 06:12:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:14.323+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8574b65-5dbc-4037-af0e-5a3e47ec1613' of type subvolume Oct 5 06:12:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8574b65-5dbc-4037-af0e-5a3e47ec1613", "force": true, "format": "json"}]: dispatch Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b8574b65-5dbc-4037-af0e-5a3e47ec1613'' moved to trashcan Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8574b65-5dbc-4037-af0e-5a3e47ec1613, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "auth_id": "david", "tenant_id": "30d95dfaad874b1a95bc3775adb2dbc3", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, 
format:json, prefix:fs subvolume authorize, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, tenant_id:30d95dfaad874b1a95bc3775adb2dbc3, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Oct 5 06:12:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, tenant_id:30d95dfaad874b1a95bc3775adb2dbc3, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:14.818+0000 7f417fc90640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Oct 5 06:12:14 localhost ceph-mgr[301363]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Oct 5 06:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:12:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:14 localhost podman[337924]: 2025-10-05 10:12:14.919696978 +0000 UTC m=+0.085940371 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc.) Oct 5 06:12:14 localhost podman[337924]: 2025-10-05 10:12:14.95776058 +0000 UTC m=+0.124003953 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7) Oct 5 06:12:14 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:12:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": 
"json"}]': finished Oct 5 06:12:15 localhost nova_compute[297130]: 2025-10-05 10:12:15.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 83 KiB/s wr, 23 op/s Oct 5 06:12:16 localhost nova_compute[297130]: 2025-10-05 10:12:16.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:16 localhost openstack_network_exporter[250246]: ERROR 10:12:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:16 localhost openstack_network_exporter[250246]: ERROR 10:12:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:16 localhost openstack_network_exporter[250246]: ERROR 10:12:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:12:16 localhost openstack_network_exporter[250246]: ERROR 10:12:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:12:16 localhost openstack_network_exporter[250246]: Oct 5 06:12:16 localhost openstack_network_exporter[250246]: ERROR 10:12:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:12:16 localhost openstack_network_exporter[250246]: Oct 5 06:12:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 114 KiB/s wr, 28 op/s Oct 5 06:12:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:12:17 localhost ceph-mon[316511]: log_channel(audit) log 
[DBG] : from='client.? 172.18.0.32:0/1318766102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:12:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:12:17 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1318766102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:12:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "auth_id": "david", "format": "json"}]: dispatch Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'e5a56fcc-7ee2-4018-8334-bfe3749856da' Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "auth_id": "david", "format": "json"}]: dispatch Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, 
vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/e5a56fcc-7ee2-4018-8334-bfe3749856da/7120417b-c7eb-49eb-931d-0f39ec505505 Oct 5 06:12:18 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/e5a56fcc-7ee2-4018-8334-bfe3749856da/7120417b-c7eb-49eb-931d-0f39ec505505],prefix=session evict} (starting...) Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:18 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} 
v 0) Oct 5 06:12:18 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:18 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:19 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:19 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:19 localhost nova_compute[297130]: 2025-10-05 10:12:19.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:19 localhost nova_compute[297130]: 2025-10-05 10:12:19.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command 
mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:12:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2996933591' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:12:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:12:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2996933591' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:12:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 86 KiB/s wr, 10 op/s Oct 5 06:12:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:12:19 localhost podman[337946]: 2025-10-05 10:12:19.91451936 +0000 UTC m=+0.082508677 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:12:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:12:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/831380807' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:12:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:12:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/831380807' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:12:19 localhost podman[337946]: 2025-10-05 10:12:19.949297524 +0000 UTC m=+0.117286861 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Oct 5 06:12:19 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:12:20 localhost nova_compute[297130]: 2025-10-05 10:12:20.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:12:20.408 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:12:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:12:20.409 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:12:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:12:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:12:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "auth_id": "david", "format": "json"}]: dispatch Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:21 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Oct 5 06:12:21 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Oct 5 06:12:21 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "auth_id": "david", "format": "json"}]: dispatch Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e Oct 5 06:12:21 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict 
{filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc/5f874439-8392-4e89-b082-e5e16260ec6e],prefix=session evict} (starting...) Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:21 localhost nova_compute[297130]: 2025-10-05 10:12:21.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:21 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:12:21 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:21 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:21 
localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:21 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 86 KiB/s wr, 10 op/s Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:22 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 
10:12:23.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.297 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.298 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.298 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.330 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.331 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c 
- - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.331 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.331 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.332 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:12:23 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:12:23 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2468140289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.792 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:12:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 134 KiB/s wr, 29 op/s Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.962 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.963 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11511MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", 
"product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.963 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:12:23 localhost nova_compute[297130]: 2025-10-05 10:12:23.964 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:12:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:24 localhost nova_compute[297130]: 2025-10-05 10:12:24.347 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:12:24 localhost nova_compute[297130]: 2025-10-05 10:12:24.347 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:12:24 localhost nova_compute[297130]: 2025-10-05 10:12:24.819 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:12:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:12:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command 
mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:12:24 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:12:24 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:12:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:24 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:12:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:25 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:25 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:25 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:12:25 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:12:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:12:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/689678042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.273 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.280 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.304 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.307 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.308 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.309 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.309 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:12:25 localhost nova_compute[297130]: 2025-10-05 10:12:25.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "format": "json"}]: dispatch Oct 5 06:12:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:25 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:25.602+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported 
operation 'clone-status' is not allowed on subvolume 'e5a56fcc-7ee2-4018-8334-bfe3749856da' of type subvolume
Oct 5 06:12:25 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e5a56fcc-7ee2-4018-8334-bfe3749856da' of type subvolume
Oct 5 06:12:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e5a56fcc-7ee2-4018-8334-bfe3749856da", "force": true, "format": "json"}]: dispatch
Oct 5 06:12:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < ""
Oct 5 06:12:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e5a56fcc-7ee2-4018-8334-bfe3749856da'' moved to trashcan
Oct 5 06:12:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:12:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e5a56fcc-7ee2-4018-8334-bfe3749856da, vol_name:cephfs) < ""
Oct 5 06:12:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 79 KiB/s wr, 23 op/s
Oct 5 06:12:26 localhost podman[248157]: time="2025-10-05T10:12:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:12:26 localhost podman[248157]: @ - - [05/Oct/2025:10:12:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1"
Oct 5 06:12:26 localhost podman[248157]: @ - - [05/Oct/2025:10:12:26 +0000] "GET
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19374 "" "Go-http-client/1.1"
Oct 5 06:12:26 localhost nova_compute[297130]: 2025-10-05 10:12:26.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:12:26 localhost nova_compute[297130]: 2025-10-05 10:12:26.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:12:26 localhost nova_compute[297130]: 2025-10-05 10:12:26.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:12:26 localhost nova_compute[297130]: 2025-10-05 10:12:26.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:12:27 localhost nova_compute[297130]: 2025-10-05 10:12:27.283 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:12:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 102 KiB/s wr, 25 op/s
Oct 5 06:12:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs",
"sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:12:28 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:12:28 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:28 localhost nova_compute[297130]: 2025-10-05 10:12:28.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:12:28 localhost nova_compute[297130]: 2025-10-05 10:12:28.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 5 06:12:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:12:28 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:12:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "format": "json"}]: dispatch
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5
06:12:28 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:28.756+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3049662-1f6a-47c4-9ebc-be7eeb68ea15' of type subvolume
Oct 5 06:12:28 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd3049662-1f6a-47c4-9ebc-be7eeb68ea15' of type subvolume
Oct 5 06:12:28 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3049662-1f6a-47c4-9ebc-be7eeb68ea15", "force": true, "format": "json"}]: dispatch
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < ""
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d3049662-1f6a-47c4-9ebc-be7eeb68ea15'' moved to trashcan
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:12:28 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d3049662-1f6a-47c4-9ebc-be7eeb68ea15, vol_name:cephfs) < ""
Oct 5 06:12:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:12:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:12:28 localhost podman[338011]: 2025-10-05 10:12:28.918483961 +0000 UTC m=+0.081853061 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 5 06:12:28 localhost systemd[1]: tmp-crun.BHmn56.mount: Deactivated successfully.
Oct 5 06:12:28 localhost podman[338012]: 2025-10-05 10:12:28.976979756 +0000 UTC m=+0.133977243 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Oct 5 06:12:28 localhost podman[338012]: 2025-10-05 10:12:28.98592734 +0000 UTC m=+0.142924827 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image':
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:12:28 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 06:12:29 localhost podman[338011]: 2025-10-05 10:12:29.005435798 +0000 UTC m=+0.168804918 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:12:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:12:29 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:12:29 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:12:29 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:12:29 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:12:29 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:12:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 72 KiB/s wr, 21 op/s
Oct 5 06:12:30 localhost nova_compute[297130]: 2025-10-05 10:12:30.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:12:31 localhost nova_compute[297130]: 2025-10-05 10:12:31.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:12:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 5 06:12:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:12:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 5 06:12:31 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:12:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 72 KiB/s wr, 21 op/s
Oct 5 06:12:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Oct 5 06:12:31 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 5 06:12:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize,
sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:12:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 5 06:12:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:12:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72
Oct 5 06:12:31 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...)
Oct 5 06:12:31 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 5 06:12:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:12:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "format": "json"}]: dispatch
Oct 5 06:12:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:12:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:12:32 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c3ed1b3-843c-400f-bb43-1c34240712dc' of type subvolume
Oct 5 06:12:32 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:32.202+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c3ed1b3-843c-400f-bb43-1c34240712dc' of type subvolume
Oct 5 06:12:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c3ed1b3-843c-400f-bb43-1c34240712dc", "force": true, "format": "json"}]: dispatch
Oct 5 06:12:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm,
sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < ""
Oct 5 06:12:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6c3ed1b3-843c-400f-bb43-1c34240712dc'' moved to trashcan
Oct 5 06:12:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:12:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c3ed1b3-843c-400f-bb43-1c34240712dc, vol_name:cephfs) < ""
Oct 5 06:12:32 localhost nova_compute[297130]: 2025-10-05 10:12:32.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:12:32 localhost nova_compute[297130]: 2025-10-05 10:12:32.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 5 06:12:32 localhost nova_compute[297130]: 2025-10-05 10:12:32.303 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 5 06:12:32 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:12:32 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 5 06:12:32 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm",
"entity": "client.alice bob"} : dispatch Oct 5 06:12:32 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:12:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:12:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:12:32 localhost systemd[1]: tmp-crun.ZC0rMA.mount: Deactivated successfully. Oct 5 06:12:32 localhost podman[338053]: 2025-10-05 10:12:32.897414461 +0000 UTC m=+0.070340958 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:12:32 localhost podman[338054]: 2025-10-05 10:12:32.907925116 +0000 UTC m=+0.074085300 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct 5 06:12:32 localhost podman[338053]: 2025-10-05 10:12:32.932085691 +0000 UTC m=+0.105012168 container exec_died
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 5 06:12:33 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:12:33 localhost podman[338054]: 2025-10-05 10:12:33.039197404 +0000 UTC m=+0.205357578 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 5 06:12:33 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:12:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0270f20-2efb-4356-91ed-32c0d6529473", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0270f20-2efb-4356-91ed-32c0d6529473, vol_name:cephfs) < "" Oct 5 06:12:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0270f20-2efb-4356-91ed-32c0d6529473/.meta.tmp' Oct 5 06:12:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0270f20-2efb-4356-91ed-32c0d6529473/.meta.tmp' to config b'/volumes/_nogroup/a0270f20-2efb-4356-91ed-32c0d6529473/.meta' Oct 5 06:12:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0270f20-2efb-4356-91ed-32c0d6529473, vol_name:cephfs) < "" Oct 5 06:12:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0270f20-2efb-4356-91ed-32c0d6529473", "format": "json"}]: dispatch Oct 5 06:12:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0270f20-2efb-4356-91ed-32c0d6529473, vol_name:cephfs) < "" Oct 5 06:12:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:a0270f20-2efb-4356-91ed-32c0d6529473, vol_name:cephfs) < "" Oct 5 06:12:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 108 KiB/s wr, 25 op/s Oct 5 06:12:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:12:35 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:35 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:35 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:35 localhost nova_compute[297130]: 2025-10-05 10:12:35.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:35 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:35 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "auth_id": "admin", "format": "json"}]: dispatch Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:35.607+0000 7f417fc90640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Oct 5 06:12:35 localhost ceph-mgr[301363]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Oct 5 06:12:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "format": "json"}]: dispatch Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:53de1133-5804-40f0-a972-8bad9c13f1cc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:53de1133-5804-40f0-a972-8bad9c13f1cc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:35.711+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53de1133-5804-40f0-a972-8bad9c13f1cc' of type subvolume Oct 5 06:12:35 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53de1133-5804-40f0-a972-8bad9c13f1cc' of type subvolume Oct 5 06:12:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53de1133-5804-40f0-a972-8bad9c13f1cc", "force": true, "format": "json"}]: dispatch Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/53de1133-5804-40f0-a972-8bad9c13f1cc'' moved to trashcan Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:12:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53de1133-5804-40f0-a972-8bad9c13f1cc, vol_name:cephfs) < "" Oct 5 06:12:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 59 KiB/s wr, 7 op/s Oct 5 06:12:36 localhost nova_compute[297130]: 2025-10-05 10:12:36.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0270f20-2efb-4356-91ed-32c0d6529473", "format": "json"}]: dispatch Oct 5 06:12:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a0270f20-2efb-4356-91ed-32c0d6529473, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a0270f20-2efb-4356-91ed-32c0d6529473, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:37 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:37.471+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0270f20-2efb-4356-91ed-32c0d6529473' of type subvolume Oct 5 06:12:37 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0270f20-2efb-4356-91ed-32c0d6529473' of type subvolume Oct 5 06:12:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0270f20-2efb-4356-91ed-32c0d6529473", "force": true, 
"format": "json"}]: dispatch Oct 5 06:12:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0270f20-2efb-4356-91ed-32c0d6529473, vol_name:cephfs) < "" Oct 5 06:12:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a0270f20-2efb-4356-91ed-32c0d6529473'' moved to trashcan Oct 5 06:12:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:12:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0270f20-2efb-4356-91ed-32c0d6529473, vol_name:cephfs) < "" Oct 5 06:12:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:12:37 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:12:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:12:37 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:12:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:12:37 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 5f5cdf5a-2fa1-4023-a597-87588084afaf (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:12:37 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 5f5cdf5a-2fa1-4023-a597-87588084afaf (Updating node-proxy deployment (+3 -> 3)) Oct 
5 06:12:37 localhost ceph-mgr[301363]: [progress INFO root] Completed event 5f5cdf5a-2fa1-4023-a597-87588084afaf (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:12:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:12:37 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:12:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 98 KiB/s wr, 11 op/s Oct 5 06:12:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:12:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:38 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:12:38 localhost systemd-journald[48149]: Data hash table of /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Oct 5 06:12:38 localhost systemd-journald[48149]: /run/log/journal/19f34a97e4e878e70ef0e6e08186acc9/system.journal: Journal header limits reached or header out-of-date, rotating. 
Oct 5 06:12:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 06:12:38 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:12:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:12:38 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:12:38 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:38 localhost rsyslogd[759]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Oct 5 06:12:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:12:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:38 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:12:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.886 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 
5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 
2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:12:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:12:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:39 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:39 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:39 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:39 localhost 
ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 75 KiB/s wr, 8 op/s Oct 5 06:12:40 localhost nova_compute[297130]: 2025-10-05 10:12:40.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, vol_name:cephfs) < "" Oct 5 06:12:40 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8/.meta.tmp' Oct 5 06:12:40 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8/.meta.tmp' to config b'/volumes/_nogroup/e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8/.meta' Oct 5 06:12:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, vol_name:cephfs) < "" Oct 5 06:12:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8", "format": "json"}]: dispatch Oct 5 06:12:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, vol_name:cephfs) < "" Oct 5 06:12:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, vol_name:cephfs) < "" Oct 5 06:12:41 localhost nova_compute[297130]: 2025-10-05 10:12:41.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bc73ac49-4b55-4bc1-a0d4-46c7cd262105", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, vol_name:cephfs) < "" Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bc73ac49-4b55-4bc1-a0d4-46c7cd262105/.meta.tmp' Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bc73ac49-4b55-4bc1-a0d4-46c7cd262105/.meta.tmp' to config b'/volumes/_nogroup/bc73ac49-4b55-4bc1-a0d4-46c7cd262105/.meta' Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, 
prefix:fs subvolume create, size:1073741824, sub_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, vol_name:cephfs) < "" Oct 5 06:12:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bc73ac49-4b55-4bc1-a0d4-46c7cd262105", "format": "json"}]: dispatch Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, vol_name:cephfs) < "" Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, vol_name:cephfs) < "" Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:12:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:12:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:41 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 75 KiB/s wr, 8 op/s Oct 5 06:12:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:41 localhost ceph-mon[316511]: 
log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:41 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:12:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:12:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:12:42.201 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:12:42 localhost nova_compute[297130]: 2025-10-05 10:12:42.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:42 localhost ovn_metadata_agent[163196]: 2025-10-05 10:12:42.203 163201 DEBUG 
neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:12:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. 
Oct 5 06:12:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 117 KiB/s wr, 14 op/s Oct 5 06:12:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:12:43 localhost podman[338189]: 2025-10-05 10:12:43.913220648 +0000 UTC m=+0.078562421 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:12:43 localhost podman[338189]: 2025-10-05 10:12:43.953328985 +0000 UTC m=+0.118670748 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:12:43 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:12:43 localhost podman[338188]: 2025-10-05 10:12:43.980168563 +0000 UTC m=+0.145686621 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Oct 5 06:12:43 localhost podman[338188]: 2025-10-05 10:12:43.990273198 +0000 UTC m=+0.155791206 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Oct 5 06:12:44 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:12:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8", "format": "json"}]: dispatch Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:44 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:44.243+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8' of type subvolume Oct 5 06:12:44 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8' of type subvolume Oct 5 06:12:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", 
"vol_name": "cephfs", "sub_name": "e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8", "force": true, "format": "json"}]: dispatch Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, vol_name:cephfs) < "" Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8'' moved to trashcan Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2fd3afd-d5d9-4b2c-be0c-698c9aaf26e8, vol_name:cephfs) < "" Oct 5 06:12:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:12:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:12:45 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:12:45 localhost ceph-mon[316511]: 
log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:12:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:45 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:12:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:45 localhost nova_compute[297130]: 2025-10-05 10:12:45.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 81 KiB/s wr, 10 op/s Oct 5 06:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:12:45 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:12:45 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:12:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:12:45 localhost podman[338233]: 2025-10-05 10:12:45.923887672 +0000 UTC m=+0.085333055 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 06:12:45 localhost podman[338233]: 2025-10-05 10:12:45.943204896 +0000 UTC m=+0.104650289 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': 
'/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git) Oct 5 06:12:45 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:12:46 localhost nova_compute[297130]: 2025-10-05 10:12:46.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:46 localhost openstack_network_exporter[250246]: ERROR 10:12:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:12:46 localhost openstack_network_exporter[250246]: ERROR 10:12:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:46 localhost openstack_network_exporter[250246]: ERROR 10:12:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:12:46 localhost openstack_network_exporter[250246]: ERROR 10:12:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:12:46 localhost openstack_network_exporter[250246]: Oct 5 06:12:46 localhost openstack_network_exporter[250246]: ERROR 10:12:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:12:46 localhost openstack_network_exporter[250246]: Oct 5 06:12:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bc73ac49-4b55-4bc1-a0d4-46c7cd262105", "format": "json"}]: dispatch Oct 5 06:12:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:12:47 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume 'bc73ac49-4b55-4bc1-a0d4-46c7cd262105' of type subvolume Oct 5 06:12:47 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:12:47.393+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bc73ac49-4b55-4bc1-a0d4-46c7cd262105' of type subvolume Oct 5 06:12:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bc73ac49-4b55-4bc1-a0d4-46c7cd262105", "force": true, "format": "json"}]: dispatch Oct 5 06:12:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, vol_name:cephfs) < "" Oct 5 06:12:47 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bc73ac49-4b55-4bc1-a0d4-46c7cd262105'' moved to trashcan Oct 5 06:12:47 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:12:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bc73ac49-4b55-4bc1-a0d4-46c7cd262105, vol_name:cephfs) < "" Oct 5 06:12:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 114 KiB/s wr, 14 op/s Oct 5 06:12:47 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
Oct 5 06:12:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:12:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:48 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:12:48 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e237 e237: 6 total, 6 up, 6 in Oct 5 06:12:49 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:49 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:49 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:49 localhost ceph-mon[316511]: from='mgr.34408 ' 
entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 90 KiB/s wr, 12 op/s Oct 5 06:12:50 localhost nova_compute[297130]: 2025-10-05 10:12:50.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:12:50 localhost podman[338254]: 2025-10-05 10:12:50.915281463 +0000 UTC m=+0.080384441 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Oct 5 06:12:50 localhost podman[338254]: 2025-10-05 10:12:50.947363182 +0000 UTC m=+0.112466350 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:12:50 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:12:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e238 e238: 6 total, 6 up, 6 in Oct 5 06:12:51 localhost nova_compute[297130]: 2025-10-05 10:12:51.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 
172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:51 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:12:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 206 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 50 KiB/s wr, 6 op/s Oct 5 06:12:52 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:52 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:52 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:52 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:52 localhost ovn_metadata_agent[163196]: 2025-10-05 10:12:52.204 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:12:52 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e239 e239: 6 total, 6 up, 6 in Oct 5 06:12:53 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", 
"sub_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < "" Oct 5 06:12:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 64 KiB/s wr, 92 op/s Oct 5 06:12:53 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta.tmp' Oct 5 06:12:53 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta.tmp' to config b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta' Oct 5 06:12:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < "" Oct 5 06:12:53 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "format": "json"}]: dispatch Oct 5 06:12:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < "" Oct 5 06:12:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < "" Oct 5 06:12:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e240 e240: 6 total, 6 up, 6 in Oct 5 06:12:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:12:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:54 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:54 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:12:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow 
r"], "format": "json"} v 0) Oct 5 06:12:54 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:12:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:12:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' 
cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:12:55 localhost nova_compute[297130]: 2025-10-05 10:12:55.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 64 KiB/s wr, 92 op/s Oct 5 06:12:56 localhost podman[248157]: time="2025-10-05T10:12:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:12:56 localhost podman[248157]: @ - - [05/Oct/2025:10:12:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:12:56 localhost podman[248157]: @ - - [05/Oct/2025:10:12:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19374 "" "Go-http-client/1.1" Oct 5 06:12:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:12:56 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1641135178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:12:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:12:56 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1641135178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:12:56 localhost nova_compute[297130]: 2025-10-05 10:12:56.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:12:56 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e241 e241: 6 total, 6 up, 6 in Oct 5 06:12:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "snap_name": "30ad7415-a54c-4ef6-a40f-33641f1edd8b", "format": "json"}]: dispatch Oct 5 06:12:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:30ad7415-a54c-4ef6-a40f-33641f1edd8b, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < "" Oct 5 06:12:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:30ad7415-a54c-4ef6-a40f-33641f1edd8b, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < "" Oct 5 06:12:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "cdbaff97-4f99-494f-ac1d-2d7755d0e42e", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:12:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:cdbaff97-4f99-494f-ac1d-2d7755d0e42e, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:12:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, 
group_name:cdbaff97-4f99-494f-ac1d-2d7755d0e42e, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:12:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 136 KiB/s wr, 175 op/s Oct 5 06:12:58 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:12:58 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:12:58 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:58 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:12:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:12:58 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:12:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:12:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:12:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:12:59 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:12:59 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:59 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": 
"auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:12:59 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:12:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:12:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:12:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 64 KiB/s wr, 73 op/s Oct 5 06:12:59 localhost podman[338274]: 2025-10-05 10:12:59.912100978 +0000 UTC m=+0.080649698 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:12:59 localhost podman[338274]: 2025-10-05 10:12:59.927149677 +0000 UTC m=+0.095698397 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Oct 5 06:12:59 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:13:00 localhost systemd[1]: tmp-crun.QnCbac.mount: Deactivated successfully. Oct 5 06:13:00 localhost podman[338275]: 2025-10-05 10:13:00.026621613 +0000 UTC m=+0.191737249 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:13:00 localhost podman[338275]: 2025-10-05 10:13:00.064394048 +0000 UTC m=+0.229509654 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:13:00 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:13:00 localhost nova_compute[297130]: 2025-10-05 10:13:00.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:13:00 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a76aa2cf-fd04-4790-b2cd-644f0c1a0627", "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:13:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a76aa2cf-fd04-4790-b2cd-644f0c1a0627, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:13:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a76aa2cf-fd04-4790-b2cd-644f0c1a0627, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta.tmp'
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta.tmp' to config b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta'
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "format": "json"}]: dispatch
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 5 06:13:01 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:13:01 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d
Oct 5 06:13:01 localhost nova_compute[297130]: 2025-10-05 10:13:01.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:13:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:13:01 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:13:01 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:13:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 54 KiB/s wr, 62 op/s
Oct 5 06:13:01 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e242 e242: 6 total, 6 up, 6 in
Oct 5 06:13:02 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:13:02 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:13:02 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:13:02 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:13:03 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "snap_name": "30ad7415-a54c-4ef6-a40f-33641f1edd8b_07a3c9cd-4723-4c5e-b203-29ac167519a9", "force": true, "format": "json"}]: dispatch
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:30ad7415-a54c-4ef6-a40f-33641f1edd8b_07a3c9cd-4723-4c5e-b203-29ac167519a9, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < ""
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta.tmp'
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta.tmp' to config b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta'
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:30ad7415-a54c-4ef6-a40f-33641f1edd8b_07a3c9cd-4723-4c5e-b203-29ac167519a9, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < ""
Oct 5 06:13:03 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "snap_name": "30ad7415-a54c-4ef6-a40f-33641f1edd8b", "force": true, "format": "json"}]: dispatch
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:30ad7415-a54c-4ef6-a40f-33641f1edd8b, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < ""
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta.tmp'
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta.tmp' to config b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff/.meta'
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:30ad7415-a54c-4ef6-a40f-33641f1edd8b, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < ""
Oct 5 06:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:13:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 112 KiB/s wr, 68 op/s
Oct 5 06:13:03 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a76aa2cf-fd04-4790-b2cd-644f0c1a0627", "force": true, "format": "json"}]: dispatch
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a76aa2cf-fd04-4790-b2cd-644f0c1a0627, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:13:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a76aa2cf-fd04-4790-b2cd-644f0c1a0627, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:13:03 localhost podman[338317]: 2025-10-05 10:13:03.923319204 +0000 UTC m=+0.085091848 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 06:13:04 localhost podman[338316]: 2025-10-05 10:13:04.008632137 +0000 UTC m=+0.174123243 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:13:04 localhost podman[338317]: 2025-10-05 10:13:04.016067278 +0000 UTC m=+0.177839882 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Oct 5 06:13:04 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:13:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:13:04 localhost podman[338316]: 2025-10-05 10:13:04.043122282 +0000 UTC m=+0.208613388 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true)
Oct 5 06:13:04 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:13:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "snap_name": "aa3112c1-4162-409b-9130-f3710a0dd16d", "format": "json"}]: dispatch
Oct 5 06:13:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa3112c1-4162-409b-9130-f3710a0dd16d, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa3112c1-4162-409b-9130-f3710a0dd16d, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:04 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 5 06:13:04 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:13:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 5 06:13:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:13:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Oct 5 06:13:04 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72
Oct 5 06:13:05 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...)
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d2def69-cd59-4372-a79a-32416a148d1e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d2def69-cd59-4372-a79a-32416a148d1e, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:13:05.080 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:13:04Z, description=, device_id=28ab91cf-71d3-4e23-adad-e3e1723928c7, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=045c26d0-d888-4de5-bb78-ea986f75c387, ip_allocation=immediate, mac_address=fa:16:3e:2e:d4:34, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3601, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:13:04Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d2def69-cd59-4372-a79a-32416a148d1e/.meta.tmp'
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d2def69-cd59-4372-a79a-32416a148d1e/.meta.tmp' to config b'/volumes/_nogroup/8d2def69-cd59-4372-a79a-32416a148d1e/.meta'
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d2def69-cd59-4372-a79a-32416a148d1e, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d2def69-cd59-4372-a79a-32416a148d1e", "format": "json"}]: dispatch
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d2def69-cd59-4372-a79a-32416a148d1e, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d2def69-cd59-4372-a79a-32416a148d1e, vol_name:cephfs) < ""
Oct 5 06:13:05 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:13:05 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 5 06:13:05 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Oct 5 06:13:05 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Oct 5 06:13:05 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses
Oct 5 06:13:05 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:13:05 localhost podman[338376]: 2025-10-05 10:13:05.418726488 +0000 UTC m=+0.064826589 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 5 06:13:05 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:13:05 localhost nova_compute[297130]: 2025-10-05 10:13:05.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:13:05 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:13:05.729 271653 INFO neutron.agent.dhcp.agent [None req-7f4dcd53-e69a-4181-b599-b6c2c0a0b8e9 - - - - - -] DHCP configuration for ports {'045c26d0-d888-4de5-bb78-ea986f75c387'} is completed#033[00m
Oct 5 06:13:05 localhost nova_compute[297130]: 2025-10-05 10:13:05.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:13:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 101 KiB/s wr, 62 op/s
Oct 5 06:13:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "format": "json"}]: dispatch
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:06.246+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '735a28d8-3af7-4700-b8c1-76a6ed9dcaff' of type subvolume
Oct 5 06:13:06 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '735a28d8-3af7-4700-b8c1-76a6ed9dcaff' of type subvolume
Oct 5 06:13:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "735a28d8-3af7-4700-b8c1-76a6ed9dcaff", "force": true, "format": "json"}]: dispatch
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/735a28d8-3af7-4700-b8c1-76a6ed9dcaff'' moved to trashcan
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:735a28d8-3af7-4700-b8c1-76a6ed9dcaff, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost nova_compute[297130]: 2025-10-05 10:13:06.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:13:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp'
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta'
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "format": "json"}]: dispatch
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < ""
Oct 5 06:13:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < ""
Oct 5 06:13:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "c901e797-7cf4-42b0-80cf-805c27e66194", "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:13:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:c901e797-7cf4-42b0-80cf-805c27e66194, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:13:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:c901e797-7cf4-42b0-80cf-805c27e66194, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:13:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 101 KiB/s wr, 11 op/s
Oct 5 06:13:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:13:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Oct 5 06:13:08 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:13:08 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d
Oct 5 06:13:08 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0)
Oct 5 06:13:08 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < ""
Oct 5 06:13:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c4bc540c-771c-4831-bcff-529134141022", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c4bc540c-771c-4831-bcff-529134141022, vol_name:cephfs) < ""
Oct 5 06:13:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Oct 5 06:13:08 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:13:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch
Oct 5 06:13:08 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c4bc540c-771c-4831-bcff-529134141022/.meta.tmp'
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4bc540c-771c-4831-bcff-529134141022/.meta.tmp' to config b'/volumes/_nogroup/c4bc540c-771c-4831-bcff-529134141022/.meta'
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c4bc540c-771c-4831-bcff-529134141022, vol_name:cephfs) < ""
Oct 5 06:13:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4bc540c-771c-4831-bcff-529134141022", "format": "json"}]: dispatch
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4bc540c-771c-4831-bcff-529134141022, vol_name:cephfs) < ""
Oct 5 06:13:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4bc540c-771c-4831-bcff-529134141022, vol_name:cephfs) < ""
Oct 5 06:13:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:13:09 localhost nova_compute[297130]: 2025-10-05 10:13:09.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:13:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "snap_name": "aa3112c1-4162-409b-9130-f3710a0dd16d_b94e6f6d-ec50-4caf-b297-f4f1aca55951", "force": true, "format": "json"}]: dispatch
Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa3112c1-4162-409b-9130-f3710a0dd16d_b94e6f6d-ec50-4caf-b297-f4f1aca55951, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < ""
Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta.tmp'
Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta.tmp' to config b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta'
Oct 5 06:13:09 localhost
ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa3112c1-4162-409b-9130-f3710a0dd16d_b94e6f6d-ec50-4caf-b297-f4f1aca55951, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < "" Oct 5 06:13:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "snap_name": "aa3112c1-4162-409b-9130-f3710a0dd16d", "force": true, "format": "json"}]: dispatch Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa3112c1-4162-409b-9130-f3710a0dd16d, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < "" Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta.tmp' Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta.tmp' to config b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea/.meta' Oct 5 06:13:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa3112c1-4162-409b-9130-f3710a0dd16d, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < "" Oct 5 06:13:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e243 e243: 6 total, 6 up, 6 in Oct 5 06:13:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 126 KiB/s wr, 14 op/s Oct 5 06:13:10 localhost 
ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "snap_name": "ebdeb2fe-7721-4e6e-9c08-db570078c646", "format": "json"}]: dispatch Oct 5 06:13:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:13:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:13:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "c901e797-7cf4-42b0-80cf-805c27e66194", "force": true, "format": "json"}]: dispatch Oct 5 06:13:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c901e797-7cf4-42b0-80cf-805c27e66194, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c901e797-7cf4-42b0-80cf-805c27e66194, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:10 localhost nova_compute[297130]: 2025-10-05 10:13:10.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e244 e244: 6 total, 6 up, 6 in Oct 5 06:13:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize 
plan auto_2025-10-05_10:13:11 Oct 5 06:13:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:13:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:13:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['vms', '.mgr', 'manila_metadata', 'backups', 'manila_data', 'volumes', 'images'] Oct 5 06:13:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:13:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4bc540c-771c-4831-bcff-529134141022", "format": "json"}]: dispatch Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4bc540c-771c-4831-bcff-529134141022, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4bc540c-771c-4831-bcff-529134141022, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:11.587+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c4bc540c-771c-4831-bcff-529134141022' of type subvolume Oct 5 06:13:11 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c4bc540c-771c-4831-bcff-529134141022' of type subvolume Oct 5 06:13:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4bc540c-771c-4831-bcff-529134141022", "force": true, "format": "json"}]: dispatch Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4bc540c-771c-4831-bcff-529134141022, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c4bc540c-771c-4831-bcff-529134141022'' moved to trashcan Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4bc540c-771c-4831-bcff-529134141022, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:13:11 localhost nova_compute[297130]: 2025-10-05 10:13:11.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:13:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:13:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:11 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:13:11 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 69 KiB/s wr, 7 op/s Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, 
client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:11 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:13:11 
localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:13:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0011327797843383584 of space, bias 4.0, pg target 0.9016927083333331 quantized to 16 (current 16) Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 
5 06:13:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:13:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e245 e245: 6 total, 6 up, 6 in Oct 5 06:13:12 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:12 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:12 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:12 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "format": "json"}]: dispatch Oct 5 06:13:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18383d43-f549-4a1c-ac16-c060cbee1cea, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18383d43-f549-4a1c-ac16-c060cbee1cea, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:12 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:12.553+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18383d43-f549-4a1c-ac16-c060cbee1cea' of type subvolume Oct 5 06:13:12 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not 
supported operation 'clone-status' is not allowed on subvolume '18383d43-f549-4a1c-ac16-c060cbee1cea' of type subvolume Oct 5 06:13:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18383d43-f549-4a1c-ac16-c060cbee1cea", "force": true, "format": "json"}]: dispatch Oct 5 06:13:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < "" Oct 5 06:13:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18383d43-f549-4a1c-ac16-c060cbee1cea'' moved to trashcan Oct 5 06:13:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18383d43-f549-4a1c-ac16-c060cbee1cea, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e9c282cf-74e6-4266-8936-de88168024fb", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e9c282cf-74e6-4266-8936-de88168024fb, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e9c282cf-74e6-4266-8936-de88168024fb, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "snap_name": "ebdeb2fe-7721-4e6e-9c08-db570078c646", "target_sub_name": "b3216fc1-46ec-449b-978a-26f9fe2dc899", "format": "json"}]: dispatch Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, target_sub_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.clone_index] tracking-id a69c1509-adbb-42eb-9d97-d0c2546782c6 for path b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, target_sub_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3216fc1-46ec-449b-978a-26f9fe2dc899", "format": "json"}]: dispatch Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.737+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.737+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.737+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.737+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File 
exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.737+0000 7f4185c9c640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899 Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, b3216fc1-46ec-449b-978a-26f9fe2dc899) Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.774+0000 7f418549b640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.774+0000 7f418549b640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.774+0000 7f418549b640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.774+0000 7f418549b640 -1 client.0 error registering admin socket 
command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:13.774+0000 7f418549b640 -1 client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: client.0 error registering admin socket command: (17) File exists Oct 5 06:13:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 209 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 133 KiB/s wr, 42 op/s Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, b3216fc1-46ec-449b-978a-26f9fe2dc899) -- by 0 seconds Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' Oct 5 06:13:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta' Oct 5 06:13:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:13:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:13:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:13:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:13:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:14 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.snap/ebdeb2fe-7721-4e6e-9c08-db570078c646/c2feb039-9384-49d2-bb0a-c88fc3131503' to b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/78dd326a-f149-4cd0-9353-4fcb0828190d' Oct 5 06:13:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 
06:13:14 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:14 localhost podman[338424]: 2025-10-05 10:13:14.950987173 +0000 UTC m=+0.109301824 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:13:14 localhost podman[338424]: 2025-10-05 10:13:14.989729573 +0000 UTC m=+0.148044274 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:13:15 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:13:15 localhost podman[338423]: 2025-10-05 10:13:15.089375235 +0000 UTC m=+0.250548624 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:13:15 localhost podman[338423]: 2025-10-05 10:13:15.10502363 +0000 UTC m=+0.266197059 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:13:15 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.clone_index] untracking a69c1509-adbb-42eb-9d97-d0c2546782c6 Oct 5 06:13:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d2def69-cd59-4372-a79a-32416a148d1e", "format": "json"}]: dispatch Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] 
Starting _cmd_fs_clone_status(clone_name:8d2def69-cd59-4372-a79a-32416a148d1e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta.tmp' to config b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899/.meta' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, b3216fc1-46ec-449b-978a-26f9fe2dc899) Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d2def69-cd59-4372-a79a-32416a148d1e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:15.365+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d2def69-cd59-4372-a79a-32416a148d1e' of type subvolume Oct 5 06:13:15 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d2def69-cd59-4372-a79a-32416a148d1e' of type subvolume Oct 5 06:13:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d2def69-cd59-4372-a79a-32416a148d1e", "force": true, "format": "json"}]: dispatch Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d2def69-cd59-4372-a79a-32416a148d1e, vol_name:cephfs) < "" Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d2def69-cd59-4372-a79a-32416a148d1e'' moved to trashcan Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d2def69-cd59-4372-a79a-32416a148d1e, vol_name:cephfs) < "" Oct 5 06:13:15 localhost nova_compute[297130]: 2025-10-05 10:13:15.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:15 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:15 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 209 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 126 KiB/s wr, 40 op/s Oct 5 06:13:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e9c282cf-74e6-4266-8936-de88168024fb", "force": true, "format": "json"}]: dispatch Oct 5 06:13:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e9c282cf-74e6-4266-8936-de88168024fb, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e9c282cf-74e6-4266-8936-de88168024fb, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:16 localhost openstack_network_exporter[250246]: ERROR 10:13:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:13:16 localhost openstack_network_exporter[250246]: ERROR 10:13:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:16 localhost 
openstack_network_exporter[250246]: ERROR 10:13:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:16 localhost nova_compute[297130]: 2025-10-05 10:13:16.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:16 localhost openstack_network_exporter[250246]: ERROR 10:13:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:13:16 localhost openstack_network_exporter[250246]: Oct 5 06:13:16 localhost openstack_network_exporter[250246]: ERROR 10:13:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:13:16 localhost openstack_network_exporter[250246]: Oct 5 06:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:13:16 localhost podman[338464]: 2025-10-05 10:13:16.902075212 +0000 UTC m=+0.069332521 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:13:16 localhost podman[338464]: 2025-10-05 10:13:16.914480398 +0000 UTC m=+0.081737757 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Oct 5 06:13:16 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:13:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e246 e246: 6 total, 6 up, 6 in Oct 5 06:13:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 593 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 123 KiB/s rd, 61 MiB/s wr, 223 op/s Oct 5 06:13:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "8c157f62-8be3-4bb5-a40a-2487e1b42779", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:8c157f62-8be3-4bb5-a40a-2487e1b42779, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:8c157f62-8be3-4bb5-a40a-2487e1b42779, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:13:18 localhost ceph-mgr[301363]: 
[volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:13:18 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:18 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:13:18 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:18 localhost 
ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:19 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:19 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:19 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:13:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 593 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 97 KiB/s rd, 48 MiB/s wr, 176 op/s Oct 5 06:13:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3216fc1-46ec-449b-978a-26f9fe2dc899", "format": "json"}]: dispatch Oct 5 06:13:19 localhost ceph-mgr[301363]: [volumes 
INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:20 localhost nova_compute[297130]: 2025-10-05 10:13:20.298 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:20 localhost nova_compute[297130]: 2025-10-05 10:13:20.324 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:13:20.409 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:13:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:13:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:13:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:13:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:13:20 localhost nova_compute[297130]: 2025-10-05 10:13:20.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:21 localhost nova_compute[297130]: 2025-10-05 10:13:21.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:21 localhost nova_compute[297130]: 2025-10-05 10:13:21.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:13:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 593 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 84 KiB/s rd, 41 MiB/s wr, 151 op/s Oct 5 06:13:21 localhost podman[338487]: 2025-10-05 10:13:21.927680829 +0000 UTC m=+0.098465491 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:13:21 localhost podman[338487]: 2025-10-05 10:13:21.938179974 +0000 UTC m=+0.108964646 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0) Oct 5 06:13:21 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. 
Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.059945) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202060373, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2879, "num_deletes": 256, "total_data_size": 3698743, "memory_usage": 3794400, "flush_reason": "Manual Compaction"} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202079099, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 2406074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28578, "largest_seqno": 31451, "table_properties": {"data_size": 2394937, "index_size": 6746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 29546, "raw_average_key_size": 22, "raw_value_size": 2370244, "raw_average_value_size": 1802, "num_data_blocks": 290, "num_entries": 1315, "num_filter_entries": 1315, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659085, "oldest_key_time": 1759659085, "file_creation_time": 1759659202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18888 microseconds, and 6479 cpu microseconds. Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.079153) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 2406074 bytes OK Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.079175) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.082920) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.082944) EVENT_LOG_v1 {"time_micros": 1759659202082937, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.082967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3684802, prev total WAL file size 
3684802, number of live WAL files 2. Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.083974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end) Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(2349KB)], [45(16MB)] Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202084045, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 19599068, "oldest_snapshot_seqno": -1} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 14333 keys, 18035615 bytes, temperature: kUnknown Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202220202, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18035615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17951663, "index_size": 47051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 384155, "raw_average_key_size": 26, "raw_value_size": 17706178, 
"raw_average_value_size": 1235, "num_data_blocks": 1756, "num_entries": 14333, "num_filter_entries": 14333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759659202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.220545) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18035615 bytes Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.241869) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.9 rd, 132.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 16.4 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(15.6) write-amplify(7.5) OK, records in: 14871, records dropped: 538 output_compression: NoCompression Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.241905) EVENT_LOG_v1 {"time_micros": 1759659202241890, "job": 26, "event": "compaction_finished", "compaction_time_micros": 136237, "compaction_time_cpu_micros": 59024, "output_level": 6, "num_output_files": 1, "total_output_size": 18035615, "num_input_records": 14871, "num_output_records": 14333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202242379, "job": 26, "event": "table_file_deletion", "file_number": 47} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659202244641, "job": 
26, "event": "table_file_deletion", "file_number": 45} Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.083834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.244737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.244746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.244749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.244752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:22 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:22.244756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 123 KiB/s rd, 86 MiB/s wr, 223 op/s Oct 5 06:13:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:24 localhost nova_compute[297130]: 2025-10-05 10:13:24.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:24 localhost nova_compute[297130]: 2025-10-05 10:13:24.272 2 DEBUG oslo_service.periodic_task [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3216fc1-46ec-449b-978a-26f9fe2dc899", "format": "json"}]: dispatch Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "cdbaff97-4f99-494f-ac1d-2d7755d0e42e", "force": true, "format": "json"}]: dispatch Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cdbaff97-4f99-494f-ac1d-2d7755d0e42e, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cdbaff97-4f99-494f-ac1d-2d7755d0e42e, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "8c157f62-8be3-4bb5-a40a-2487e1b42779", "force": true, "format": "json"}]: dispatch Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:8c157f62-8be3-4bb5-a40a-2487e1b42779, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:8c157f62-8be3-4bb5-a40a-2487e1b42779, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:13:24 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:24 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:24 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.273 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:13:25 localhost 
nova_compute[297130]: 2025-10-05 10:13:25.286 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.287 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.304 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.304 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.305 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.305 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.306 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:13:25 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:25 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:25 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:25 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:25 localhost nova_compute[297130]: 
2025-10-05 10:13:25.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:13:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/341782301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.794 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:13:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 123 KiB/s rd, 86 MiB/s wr, 223 op/s Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.997 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:13:25 localhost nova_compute[297130]: 2025-10-05 10:13:25.999 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11491MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:25.999 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.000 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:13:26 localhost podman[248157]: time="2025-10-05T10:13:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:13:26 localhost podman[248157]: @ - - [05/Oct/2025:10:13:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:13:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61f54faa-3263-499e-b9fe-d27f219c60f3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs 
subvolume create, size:1073741824, sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:26 localhost podman[248157]: @ - - [05/Oct/2025:10:13:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19384 "" "Go-http-client/1.1" Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.079 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.079 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.107 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing inventories for resource provider 36221146-244b-49ab-8700-5471fa19d0c5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.128 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating ProviderTree inventory for provider 36221146-244b-49ab-8700-5471fa19d0c5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 
'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.128 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Updating inventory in ProviderTree for provider 36221146-244b-49ab-8700-5471fa19d0c5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Oct 5 06:13:26 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/61f54faa-3263-499e-b9fe-d27f219c60f3/.meta.tmp' Oct 5 06:13:26 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/61f54faa-3263-499e-b9fe-d27f219c60f3/.meta.tmp' to config b'/volumes/_nogroup/61f54faa-3263-499e-b9fe-d27f219c60f3/.meta' Oct 5 06:13:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61f54faa-3263-499e-b9fe-d27f219c60f3", "format": "json"}]: dispatch Oct 5 06:13:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.147 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing aggregate associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.187 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Refreshing trait associations for resource provider 36221146-244b-49ab-8700-5471fa19d0c5, traits: COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,HW_CPU_X86_FMA3,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYP
E_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,HW_CPU_X86_ABM,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.211 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:13:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:13:26 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1376968945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.656 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.662 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.685 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.688 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.688 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:13:26 localhost nova_compute[297130]: 2025-10-05 10:13:26.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "2976bf0c-3a29-46fc-b779-4791c9eac923", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:2976bf0c-3a29-46fc-b779-4791c9eac923, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:2976bf0c-3a29-46fc-b779-4791c9eac923, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:27 localhost nova_compute[297130]: 2025-10-05 10:13:27.684 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs 
subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Oct 5 06:13:27 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Oct 5 06:13:27 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 210 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 166 KiB/s rd, 95 MiB/s wr, 312 op/s Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice", "format": "json"}]: dispatch Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, 
client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:27 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e247 e247: 6 total, 6 up, 6 in Oct 5 06:13:28 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Oct 5 06:13:28 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:28 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Oct 5 06:13:28 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Oct 5 06:13:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:29 localhost nova_compute[297130]: 2025-10-05 10:13:29.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:29 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "61f54faa-3263-499e-b9fe-d27f219c60f3", "new_size": 2147483648, "format": "json"}]: dispatch Oct 5 06:13:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e248 e248: 6 total, 6 up, 6 in Oct 5 06:13:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 210 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 143 KiB/s rd, 80 MiB/s wr, 277 op/s Oct 5 06:13:30 localhost nova_compute[297130]: 2025-10-05 10:13:30.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:13:30 localhost nova_compute[297130]: 2025-10-05 10:13:30.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:13:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "2976bf0c-3a29-46fc-b779-4791c9eac923", "force": true, "format": "json"}]: dispatch Oct 5 06:13:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:2976bf0c-3a29-46fc-b779-4791c9eac923, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:2976bf0c-3a29-46fc-b779-4791c9eac923, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:30 localhost nova_compute[297130]: 2025-10-05 10:13:30.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:13:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:13:30 localhost systemd[1]: tmp-crun.1M8rXe.mount: Deactivated successfully. 
Oct 5 06:13:30 localhost podman[338551]: 2025-10-05 10:13:30.924342171 +0000 UTC m=+0.086221539 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:13:30 localhost podman[338551]: 2025-10-05 10:13:30.935221605 +0000 UTC m=+0.097101003 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:13:30 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:13:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:13:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:30 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:13:30 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:30 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:31 localhost podman[338550]: 2025-10-05 10:13:31.032276507 +0000 UTC m=+0.198442542 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:13:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:31 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : 
dispatch Oct 5 06:13:31 localhost podman[338550]: 2025-10-05 10:13:31.071161011 +0000 UTC m=+0.237327086 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:13:31 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:13:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:31 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e249 e249: 6 total, 6 up, 6 in Oct 5 06:13:31 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:31 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:31 localhost nova_compute[297130]: 2025-10-05 10:13:31.804 2 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 210 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 28 MiB/s wr, 189 op/s Oct 5 06:13:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61f54faa-3263-499e-b9fe-d27f219c60f3", "format": "json"}]: dispatch Oct 5 06:13:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:61f54faa-3263-499e-b9fe-d27f219c60f3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:61f54faa-3263-499e-b9fe-d27f219c60f3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:32 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:32.619+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61f54faa-3263-499e-b9fe-d27f219c60f3' of type subvolume Oct 5 06:13:32 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '61f54faa-3263-499e-b9fe-d27f219c60f3' of type subvolume Oct 5 06:13:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61f54faa-3263-499e-b9fe-d27f219c60f3", "force": true, "format": "json"}]: dispatch Oct 5 06:13:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/61f54faa-3263-499e-b9fe-d27f219c60f3'' moved to trashcan Oct 5 06:13:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:61f54faa-3263-499e-b9fe-d27f219c60f3, vol_name:cephfs) < "" Oct 5 06:13:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 89 KiB/s wr, 41 op/s Oct 5 06:13:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "45dee2ac-fbcf-4d37-8b69-c648b297ba12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, vol_name:cephfs) < "" Oct 5 06:13:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45dee2ac-fbcf-4d37-8b69-c648b297ba12/.meta.tmp' Oct 5 06:13:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45dee2ac-fbcf-4d37-8b69-c648b297ba12/.meta.tmp' to config b'/volumes/_nogroup/45dee2ac-fbcf-4d37-8b69-c648b297ba12/.meta' Oct 5 06:13:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, vol_name:cephfs) < "" Oct 5 06:13:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "45dee2ac-fbcf-4d37-8b69-c648b297ba12", "format": "json"}]: dispatch Oct 5 06:13:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, vol_name:cephfs) < "" Oct 5 06:13:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, vol_name:cephfs) < "" Oct 5 06:13:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:13:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:13:34 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:34 
localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:13:34 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:13:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:34 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:13:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:34 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:34 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e250 e250: 6 total, 6 up, 6 in Oct 5 06:13:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:13:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:13:34 localhost podman[338592]: 2025-10-05 10:13:34.885784318 +0000 UTC m=+0.054213431 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2) Oct 5 06:13:34 localhost podman[338592]: 2025-10-05 10:13:34.925114694 +0000 UTC m=+0.093543767 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3) Oct 5 06:13:34 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:13:34 localhost podman[338593]: 2025-10-05 10:13:34.970731721 +0000 UTC m=+0.131915538 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:13:35 localhost podman[338593]: 2025-10-05 10:13:35.03267723 +0000 UTC m=+0.193861047 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:13:35 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:13:35 localhost nova_compute[297130]: 2025-10-05 10:13:35.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 89 KiB/s wr, 41 op/s Oct 5 06:13:36 localhost nova_compute[297130]: 2025-10-05 10:13:36.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e251 e251: 6 total, 6 up, 6 in Oct 5 06:13:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "45dee2ac-fbcf-4d37-8b69-c648b297ba12", "format": "json"}]: dispatch Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:37.106+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '45dee2ac-fbcf-4d37-8b69-c648b297ba12' of type subvolume Oct 5 06:13:37 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '45dee2ac-fbcf-4d37-8b69-c648b297ba12' of type subvolume Oct 5 06:13:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "45dee2ac-fbcf-4d37-8b69-c648b297ba12", "force": true, "format": "json"}]: dispatch Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/45dee2ac-fbcf-4d37-8b69-c648b297ba12'' moved to trashcan Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:45dee2ac-fbcf-4d37-8b69-c648b297ba12, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:13:37 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", 
"format": "json"} : dispatch Oct 5 06:13:37 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice_bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:37 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:37 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mgr[301363]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:13:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 186 KiB/s wr, 134 op/s Oct 5 06:13:38 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:38 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:38 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:38 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e252 e252: 6 total, 6 up, 6 in 
Oct 5 06:13:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4bb40a33-3ed5-4705-af25-b85668dae0d3", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4bb40a33-3ed5-4705-af25-b85668dae0d3/.meta.tmp' Oct 5 06:13:38 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4bb40a33-3ed5-4705-af25-b85668dae0d3/.meta.tmp' to config b'/volumes/_nogroup/4bb40a33-3ed5-4705-af25-b85668dae0d3/.meta' Oct 5 06:13:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:38 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4bb40a33-3ed5-4705-af25-b85668dae0d3", "format": "json"}]: dispatch Oct 5 06:13:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:38 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:13:38 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:13:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:13:38 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:13:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:13:38 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 05b0d092-2223-4a52-a02f-72d8e54fbdd2 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:13:38 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 05b0d092-2223-4a52-a02f-72d8e54fbdd2 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:13:38 localhost ceph-mgr[301363]: [progress INFO root] Completed event 05b0d092-2223-4a52-a02f-72d8e54fbdd2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:13:38 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:13:38 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:13:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 
full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:39 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:13:39 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:13:39 localhost ovn_controller[157556]: 2025-10-05T10:13:39Z|00201|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory Oct 5 06:13:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 111 KiB/s wr, 103 op/s Oct 5 06:13:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:13:40 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4006302127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:13:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:13:40 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4006302127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:13:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf417acc-04e5-4a14-a7b6-349952062211", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf417acc-04e5-4a14-a7b6-349952062211, vol_name:cephfs) < "" Oct 5 06:13:40 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf417acc-04e5-4a14-a7b6-349952062211/.meta.tmp' Oct 5 06:13:40 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf417acc-04e5-4a14-a7b6-349952062211/.meta.tmp' to config b'/volumes/_nogroup/bf417acc-04e5-4a14-a7b6-349952062211/.meta' Oct 5 06:13:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf417acc-04e5-4a14-a7b6-349952062211, vol_name:cephfs) < "" Oct 5 06:13:40 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf417acc-04e5-4a14-a7b6-349952062211", "format": "json"}]: dispatch Oct 5 06:13:40 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf417acc-04e5-4a14-a7b6-349952062211, vol_name:cephfs) < "" Oct 5 06:13:40 localhost ceph-mgr[301363]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf417acc-04e5-4a14-a7b6-349952062211, vol_name:cephfs) < "" Oct 5 06:13:40 localhost nova_compute[297130]: 2025-10-05 10:13:40.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab543715-bba5-44a4-a4e3-fff5dbc0630b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "format": "json"}]: dispatch Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/af643f0a-f868-4367-9627-74ff7447e2ed/ab543715-bba5-44a4-a4e3-fff5dbc0630b/.meta.tmp' Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/af643f0a-f868-4367-9627-74ff7447e2ed/ab543715-bba5-44a4-a4e3-fff5dbc0630b/.meta.tmp' to config b'/volumes/af643f0a-f868-4367-9627-74ff7447e2ed/ab543715-bba5-44a4-a4e3-fff5dbc0630b/.meta' Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: log_channel(audit) log 
[DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab543715-bba5-44a4-a4e3-fff5dbc0630b", "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "format": "json"}]: dispatch Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolume getpath, sub_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolume getpath, sub_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Oct 5 06:13:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Oct 5 06:13:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 
172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice_bob", "format": "json"}]: dispatch Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:41 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:13:41 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4bb40a33-3ed5-4705-af25-b85668dae0d3", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 90 KiB/s wr, 83 op/s Oct 5 06:13:41 localhost nova_compute[297130]: 2025-10-05 10:13:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:41 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:41 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:13:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:13:42 localhost 
ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Oct 5 06:13:42 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Oct 5 06:13:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Oct 5 06:13:42 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:13:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e253 e253: 6 total, 6 up, 6 in Oct 5 06:13:43 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf417acc-04e5-4a14-a7b6-349952062211", "format": "json"}]: dispatch Oct 5 06:13:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf417acc-04e5-4a14-a7b6-349952062211, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf417acc-04e5-4a14-a7b6-349952062211, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:43 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:43.788+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf417acc-04e5-4a14-a7b6-349952062211' of type subvolume Oct 5 06:13:43 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' 
is not allowed on subvolume 'bf417acc-04e5-4a14-a7b6-349952062211' of type subvolume Oct 5 06:13:43 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf417acc-04e5-4a14-a7b6-349952062211", "force": true, "format": "json"}]: dispatch Oct 5 06:13:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf417acc-04e5-4a14-a7b6-349952062211, vol_name:cephfs) < "" Oct 5 06:13:43 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf417acc-04e5-4a14-a7b6-349952062211'' moved to trashcan Oct 5 06:13:43 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf417acc-04e5-4a14-a7b6-349952062211, vol_name:cephfs) < "" Oct 5 06:13:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 170 KiB/s wr, 138 op/s Oct 5 06:13:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:44 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:13:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice 
bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:13:44 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:44 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:44 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:44 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:45 localhost 
ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4bb40a33-3ed5-4705-af25-b85668dae0d3", "format": "json"}]: dispatch Oct 5 06:13:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:45 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:45.135+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4bb40a33-3ed5-4705-af25-b85668dae0d3' of type subvolume Oct 5 06:13:45 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4bb40a33-3ed5-4705-af25-b85668dae0d3' of type subvolume Oct 5 06:13:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4bb40a33-3ed5-4705-af25-b85668dae0d3", "force": true, "format": "json"}]: dispatch Oct 5 06:13:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4bb40a33-3ed5-4705-af25-b85668dae0d3'' moved to trashcan Oct 5 06:13:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:45 
localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4bb40a33-3ed5-4705-af25-b85668dae0d3, vol_name:cephfs) < "" Oct 5 06:13:45 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:45 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:45 localhost nova_compute[297130]: 2025-10-05 10:13:45.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:45 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:13:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:13:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 62 KiB/s wr, 40 op/s Oct 5 06:13:45 localhost podman[338723]: 2025-10-05 10:13:45.928870437 +0000 UTC m=+0.090601578 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute) Oct 5 06:13:45 localhost systemd[1]: tmp-crun.zXttCl.mount: Deactivated successfully. Oct 5 06:13:45 localhost podman[338724]: 2025-10-05 10:13:45.980594279 +0000 UTC m=+0.140420579 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:13:45 localhost podman[338723]: 2025-10-05 10:13:45.994390113 +0000 UTC m=+0.156121254 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:13:46 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:13:46 localhost podman[338724]: 2025-10-05 10:13:46.016201294 +0000 UTC m=+0.176027554 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:13:46 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. 
Oct 5 06:13:46 localhost openstack_network_exporter[250246]: ERROR 10:13:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:46 localhost openstack_network_exporter[250246]: ERROR 10:13:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:13:46 localhost openstack_network_exporter[250246]: ERROR 10:13:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:13:46 localhost openstack_network_exporter[250246]: ERROR 10:13:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:13:46 localhost openstack_network_exporter[250246]: Oct 5 06:13:46 localhost openstack_network_exporter[250246]: ERROR 10:13:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:13:46 localhost openstack_network_exporter[250246]: Oct 5 06:13:46 localhost nova_compute[297130]: 2025-10-05 10:13:46.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e254 e254: 6 total, 6 up, 6 in Oct 5 06:13:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:13:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:13:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:13:47 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Oct 5 06:13:47 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 121 KiB/s wr, 106 op/s Oct 5 06:13:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:47 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:13:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:47 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, 
client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:47 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:13:47 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:47 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:47 localhost podman[338766]: 2025-10-05 10:13:47.930368282 +0000 UTC m=+0.086055114 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers) Oct 5 06:13:47 localhost podman[338766]: 2025-10-05 10:13:47.974197891 +0000 UTC m=+0.129884733 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Oct 5 06:13:47 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:13:48 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e255 e255: 6 total, 6 up, 6 in Oct 5 06:13:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cee63e0-4860-441e-a83b-c9b2aeee1050", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, vol_name:cephfs) < "" Oct 5 06:13:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cee63e0-4860-441e-a83b-c9b2aeee1050/.meta.tmp' Oct 5 06:13:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cee63e0-4860-441e-a83b-c9b2aeee1050/.meta.tmp' to config b'/volumes/_nogroup/9cee63e0-4860-441e-a83b-c9b2aeee1050/.meta' Oct 5 06:13:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, vol_name:cephfs) < "" Oct 5 06:13:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cee63e0-4860-441e-a83b-c9b2aeee1050", "format": "json"}]: dispatch Oct 5 06:13:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, vol_name:cephfs) < "" Oct 5 06:13:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, vol_name:cephfs) < "" Oct 5 06:13:48 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:48 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:48 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:48 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:49 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "snap_name": "9e2ee4c9-2a5b-4341-9e8e-421ba8a38b59", "force": true, "format": "json"}]: dispatch Oct 5 06:13:49 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolumegroup snapshot rm, snap_name:9e2ee4c9-2a5b-4341-9e8e-421ba8a38b59, vol_name:cephfs) < "" Oct 5 06:13:49 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolumegroup snapshot rm, snap_name:9e2ee4c9-2a5b-4341-9e8e-421ba8a38b59, vol_name:cephfs) < "" Oct 5 06:13:49 
localhost ovn_metadata_agent[163196]: 2025-10-05 10:13:49.527 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:13:49 localhost nova_compute[297130]: 2025-10-05 10:13:49.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:49 localhost ovn_metadata_agent[163196]: 2025-10-05 10:13:49.529 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:13:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 73 KiB/s wr, 81 op/s Oct 5 06:13:50 localhost nova_compute[297130]: 2025-10-05 10:13:50.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "r", "format": "json"}]: dispatch Oct 5 06:13:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:13:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID alice bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:51 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:51 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, 
vol_name:cephfs) < "" Oct 5 06:13:51 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:51 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow r pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 59 KiB/s wr, 66 op/s Oct 5 06:13:51 localhost nova_compute[297130]: 2025-10-05 10:13:51.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:52 localhost ceph-mgr[301363]: log_channel(audit) 
log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab543715-bba5-44a4-a4e3-fff5dbc0630b", "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "format": "json"}]: dispatch Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:52.739+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab543715-bba5-44a4-a4e3-fff5dbc0630b' of type subvolume Oct 5 06:13:52 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab543715-bba5-44a4-a4e3-fff5dbc0630b' of type subvolume Oct 5 06:13:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab543715-bba5-44a4-a4e3-fff5dbc0630b", "force": true, "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "format": "json"}]: dispatch Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolume rm, sub_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, vol_name:cephfs) < "" Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume 
path 'b'/volumes/af643f0a-f868-4367-9627-74ff7447e2ed/ab543715-bba5-44a4-a4e3-fff5dbc0630b'' moved to trashcan Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolume rm, sub_name:ab543715-bba5-44a4-a4e3-fff5dbc0630b, vol_name:cephfs) < "" Oct 5 06:13:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:13:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cee63e0-4860-441e-a83b-c9b2aeee1050", "format": "json"}]: dispatch Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:52 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:13:52.899+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cee63e0-4860-441e-a83b-c9b2aeee1050' of type subvolume Oct 5 06:13:52 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cee63e0-4860-441e-a83b-c9b2aeee1050' of type subvolume Oct 5 06:13:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", 
"vol_name": "cephfs", "sub_name": "9cee63e0-4860-441e-a83b-c9b2aeee1050", "force": true, "format": "json"}]: dispatch Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, vol_name:cephfs) < "" Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9cee63e0-4860-441e-a83b-c9b2aeee1050'' moved to trashcan Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cee63e0-4860-441e-a83b-c9b2aeee1050, vol_name:cephfs) < "" Oct 5 06:13:52 localhost podman[338787]: 2025-10-05 10:13:52.920753364 +0000 UTC m=+0.087277697 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:13:52 localhost podman[338787]: 2025-10-05 10:13:52.925112593 +0000 UTC m=+0.091636956 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:13:52 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:13:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 128 KiB/s wr, 114 op/s Oct 5 06:13:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:54 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:13:54 localhost podman[338823]: 2025-10-05 10:13:54.504288068 +0000 UTC m=+0.059561595 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:13:54 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:13:54 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:13:54 localhost ovn_metadata_agent[163196]: 2025-10-05 10:13:54.532 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:13:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:13:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:54 localhost nova_compute[297130]: 2025-10-05 10:13:54.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Oct 5 06:13:54 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", 
"entity": "client.alice bob"} v 0) Oct 5 06:13:54 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "alice bob", "format": "json"}]: dispatch Oct 5 06:13:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:54 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:13:54 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) 
Oct 5 06:13:54 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:13:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:13:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Oct 5 06:13:55 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Oct 5 06:13:55 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Oct 5 06:13:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "af643f0a-f868-4367-9627-74ff7447e2ed", "force": true, "format": "json"}]: dispatch Oct 5 06:13:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:af643f0a-f868-4367-9627-74ff7447e2ed, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:13:55 localhost nova_compute[297130]: 2025-10-05 10:13:55.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 116 KiB/s wr, 104 op/s Oct 5 06:13:56 localhost podman[248157]: time="2025-10-05T10:13:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:13:56 localhost podman[248157]: @ - - [05/Oct/2025:10:13:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:13:56 localhost podman[248157]: @ - - [05/Oct/2025:10:13:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19382 "" "Go-http-client/1.1" Oct 5 06:13:56 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3216fc1-46ec-449b-978a-26f9fe2dc899", "format": "json"}]: dispatch Oct 5 06:13:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:13:56 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3216fc1-46ec-449b-978a-26f9fe2dc899", "force": true, "format": "json"}]: dispatch Oct 5 06:13:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, 
vol_name:cephfs) < "" Oct 5 06:13:56 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b3216fc1-46ec-449b-978a-26f9fe2dc899'' moved to trashcan Oct 5 06:13:56 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:13:56 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3216fc1-46ec-449b-978a-26f9fe2dc899, vol_name:cephfs) < "" Oct 5 06:13:56 localhost nova_compute[297130]: 2025-10-05 10:13:56.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:13:57 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e256 e256: 6 total, 6 up, 6 in Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. 
Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.059321) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237059390, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1057, "num_deletes": 254, "total_data_size": 1153149, "memory_usage": 1173504, "flush_reason": "Manual Compaction"} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237066935, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 651857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31456, "largest_seqno": 32508, "table_properties": {"data_size": 647390, "index_size": 1938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12571, "raw_average_key_size": 22, "raw_value_size": 637531, "raw_average_value_size": 1120, "num_data_blocks": 84, "num_entries": 569, "num_filter_entries": 569, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659202, "oldest_key_time": 1759659202, "file_creation_time": 1759659237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7655 microseconds, and 3260 cpu microseconds. Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.066981) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 651857 bytes OK Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.067004) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.069087) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.069108) EVENT_LOG_v1 {"time_micros": 1759659237069101, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.069130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1147548, prev total WAL file size 
1147548, number of live WAL files 2. Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.069862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323536' seq:72057594037927935, type:22 .. '6D6772737461740034353037' seq:0, type:0; will stop at (end) Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(636KB)], [48(17MB)] Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237069916, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 18687472, "oldest_snapshot_seqno": -1} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 14385 keys, 16670658 bytes, temperature: kUnknown Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237176475, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 16670658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16590258, "index_size": 43371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 386073, "raw_average_key_size": 26, "raw_value_size": 16347696, 
"raw_average_value_size": 1136, "num_data_blocks": 1600, "num_entries": 14385, "num_filter_entries": 14385, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759659237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.176848) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 16670658 bytes Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.180129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.2 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 17.2 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(54.2) write-amplify(25.6) OK, records in: 14902, records dropped: 517 output_compression: NoCompression Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.180159) EVENT_LOG_v1 {"time_micros": 1759659237180146, "job": 28, "event": "compaction_finished", "compaction_time_micros": 106679, "compaction_time_cpu_micros": 50524, "output_level": 6, "num_output_files": 1, "total_output_size": 16670658, "num_input_records": 14902, "num_output_records": 14385, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237180402, "job": 28, "event": "table_file_deletion", "file_number": 50} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659237182715, "job": 
28, "event": "table_file_deletion", "file_number": 48} Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.069665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.182813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.182820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.182824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.182827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:13:57.182830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:13:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 104 KiB/s wr, 44 op/s Oct 5 06:13:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:13:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < 
"" Oct 5 06:13:57 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 5 06:13:57 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:13:57 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: Creating meta for ID bob with tenant a9b852a8688645e9918c5ecfd16d601d Oct 5 06:13:58 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} v 0) Oct 5 06:13:58 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:13:58 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:13:58 localhost 
ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"} : dispatch Oct 5 06:13:58 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c", "mon", "allow r"], "format": "json"}]': finished Oct 5 06:13:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:13:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta.tmp' Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta.tmp' to config b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta' Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "format": "json"}]: dispatch Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "snap_name": "ebdeb2fe-7721-4e6e-9c08-db570078c646_a7bb20c9-03b8-4fca-be98-3366eab61cc9", "force": true, "format": "json"}]: dispatch Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, 
format:json, prefix:fs subvolume snapshot rm, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646_a7bb20c9-03b8-4fca-be98-3366eab61cc9, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta' Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646_a7bb20c9-03b8-4fca-be98-3366eab61cc9, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "snap_name": "ebdeb2fe-7721-4e6e-9c08-db570078c646", "force": true, "format": "json"}]: dispatch Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta.tmp' to config 
b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6/.meta' Oct 5 06:13:59 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ebdeb2fe-7721-4e6e-9c08-db570078c646, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:13:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 102 KiB/s wr, 43 op/s Oct 5 06:14:00 localhost nova_compute[297130]: 2025-10-05 10:14:00.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:14:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:14:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 102 KiB/s wr, 43 op/s Oct 5 06:14:01 localhost systemd[1]: tmp-crun.orppN0.mount: Deactivated successfully. 
Oct 5 06:14:01 localhost podman[338844]: 2025-10-05 10:14:01.923242555 +0000 UTC m=+0.093716851 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd) Oct 5 06:14:01 localhost podman[338845]: 2025-10-05 10:14:01.960622989 +0000 UTC m=+0.129184414 container health_status 
ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:14:01 localhost podman[338844]: 2025-10-05 10:14:01.989453961 +0000 UTC m=+0.159928237 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:14:01 localhost nova_compute[297130]: 2025-10-05 10:14:01.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:01 localhost podman[338845]: 2025-10-05 10:14:01.999630596 +0000 UTC m=+0.168192061 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 
'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:14:02 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:14:02 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:14:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "snap_name": "36ec1af4-fb27-4bc9-9600-370d3671c884", "format": "json"}]: dispatch Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:36ec1af4-fb27-4bc9-9600-370d3671c884, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:36ec1af4-fb27-4bc9-9600-370d3671c884, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "format": "json"}]: dispatch Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:be2e4c48-ff64-403c-93e3-17bddee99de6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:be2e4c48-ff64-403c-93e3-17bddee99de6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'be2e4c48-ff64-403c-93e3-17bddee99de6' of type subvolume Oct 5 06:14:02 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:02.601+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 
'clone-status' is not allowed on subvolume 'be2e4c48-ff64-403c-93e3-17bddee99de6' of type subvolume Oct 5 06:14:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "be2e4c48-ff64-403c-93e3-17bddee99de6", "force": true, "format": "json"}]: dispatch Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/be2e4c48-ff64-403c-93e3-17bddee99de6'' moved to trashcan Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:be2e4c48-ff64-403c-93e3-17bddee99de6, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/.meta.tmp' Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/.meta.tmp' to config b'/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/.meta' Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "format": "json"}]: dispatch Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 99 KiB/s wr, 10 op/s Oct 5 06:14:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e257 e257: 6 total, 6 up, 6 in Oct 5 06:14:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "snap_name": "36ec1af4-fb27-4bc9-9600-370d3671c884_2f385b9a-30c9-4d39-8b5a-aad6e0370d9d", "force": 
true, "format": "json"}]: dispatch Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ec1af4-fb27-4bc9-9600-370d3671c884_2f385b9a-30c9-4d39-8b5a-aad6e0370d9d, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta.tmp' Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta.tmp' to config b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta' Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ec1af4-fb27-4bc9-9600-370d3671c884_2f385b9a-30c9-4d39-8b5a-aad6e0370d9d, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "snap_name": "36ec1af4-fb27-4bc9-9600-370d3671c884", "force": true, "format": "json"}]: dispatch Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ec1af4-fb27-4bc9-9600-370d3671c884, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta.tmp' Oct 5 06:14:05 localhost 
ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta.tmp' to config b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4/.meta' Oct 5 06:14:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:36ec1af4-fb27-4bc9-9600-370d3671c884, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:05 localhost nova_compute[297130]: 2025-10-05 10:14:05.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:14:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:14:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.0 KiB/s rd, 112 KiB/s wr, 11 op/s Oct 5 06:14:05 localhost podman[338886]: 2025-10-05 10:14:05.951124624 +0000 UTC m=+0.082699004 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3) Oct 5 06:14:05 localhost podman[338886]: 
2025-10-05 10:14:05.991060676 +0000 UTC m=+0.122635076 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 06:14:06 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:14:06 localhost podman[338887]: 2025-10-05 10:14:06.01183491 +0000 UTC m=+0.138940248 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, container_name=ovn_controller, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Oct 5 06:14:06 localhost podman[338887]: 2025-10-05 10:14:06.07493719 +0000 UTC m=+0.202042498 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:06 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:14:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "auth_id": "bob", "tenant_id": "a9b852a8688645e9918c5ecfd16d601d", "access_level": "rw", "format": "json"}]: dispatch Oct 5 06:14:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:14:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 5 06:14:06 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} v 0) Oct 5 06:14:06 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw 
path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} : dispatch Oct 5 06:14:06 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 5 06:14:06 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, tenant_id:a9b852a8688645e9918c5ecfd16d601d, vol_name:cephfs) < "" Oct 5 06:14:06 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:06 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} : dispatch Oct 5 06:14:06 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw 
path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]} : dispatch Oct 5 06:14:06 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72,allow rw path=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c,allow rw pool=manila_data namespace=fsvolumens_ea80201b-ce51-4972-8c9e-95c0a29ba758"]}]': finished Oct 5 06:14:06 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:07 localhost nova_compute[297130]: 2025-10-05 10:14:07.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd879934-a84d-43e2-984d-2893bc73f198", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:07 
localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta.tmp' Oct 5 06:14:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta.tmp' to config b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta' Oct 5 06:14:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd879934-a84d-43e2-984d-2893bc73f198", "format": "json"}]: dispatch Oct 5 06:14:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s Oct 5 06:14:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "format": "json"}]: dispatch Oct 5 06:14:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:08 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:08.967+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e3680e59-cc65-4c1f-b638-78d1c8b680b4' of type subvolume Oct 5 06:14:08 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e3680e59-cc65-4c1f-b638-78d1c8b680b4' of type subvolume Oct 5 06:14:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e3680e59-cc65-4c1f-b638-78d1c8b680b4", "force": true, "format": "json"}]: dispatch Oct 5 06:14:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e3680e59-cc65-4c1f-b638-78d1c8b680b4'' moved to trashcan Oct 5 06:14:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e3680e59-cc65-4c1f-b638-78d1c8b680b4, vol_name:cephfs) < "" Oct 5 06:14:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e258 e258: 6 total, 6 up, 6 in Oct 5 06:14:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "auth_id": "bob", "format": "json"}]: dispatch Oct 5 06:14:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 5 06:14:09 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} v 0) Oct 5 06:14:09 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} : dispatch Oct 5 06:14:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:09 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "auth_id": "bob", "format": "json"}]: dispatch Oct 5 06:14:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855 Oct 5 06:14:09 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758/733f38c1-eed7-4597-a1b6-bab97daa5855],prefix=session evict} (starting...) 
Oct 5 06:14:09 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:14:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 133 KiB/s wr, 14 op/s Oct 5 06:14:10 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:10 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} : dispatch Oct 5 06:14:10 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]} : dispatch Oct 5 06:14:10 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72", "osd", "allow rw pool=manila_data namespace=fsvolumens_5c152103-e1c7-44cb-9a71-b5439bf3485c"]}]': finished Oct 5 06:14:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : 
from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bd879934-a84d-43e2-984d-2893bc73f198", "snap_name": "42fd7562-11a4-4198-bbca-2b4ad7801fe7", "format": "json"}]: dispatch Oct 5 06:14:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:42fd7562-11a4-4198-bbca-2b4ad7801fe7, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:42fd7562-11a4-4198-bbca-2b4ad7801fe7, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:10 localhost nova_compute[297130]: 2025-10-05 10:14:10.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:14:11 Oct 5 06:14:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:14:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:14:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['backups', 'images', 'manila_data', 'vms', '.mgr', 'manila_metadata', 'volumes'] Oct 5 06:14:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:14:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:14:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:14:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:14:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:14:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:14:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:14:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 68 KiB/s wr, 6 op/s Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO 
root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.453674623115578e-06 of space, bias 1.0, pg target 0.00048828125 quantized to 32 (current 32) Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:14:11 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0015455423820491345 of space, bias 4.0, pg target 1.230251736111111 quantized to 16 (current 16) Oct 5 06:14:11 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:14:11 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:14:11 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:14:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:14:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e259 e259: 6 total, 6 up, 6 in Oct 5 06:14:12 localhost nova_compute[297130]: 2025-10-05 
10:14:12.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f84cb9de-fc83-48fa-92b5-d3708de87696", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f84cb9de-fc83-48fa-92b5-d3708de87696, vol_name:cephfs) < "" Oct 5 06:14:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f84cb9de-fc83-48fa-92b5-d3708de87696/.meta.tmp' Oct 5 06:14:12 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f84cb9de-fc83-48fa-92b5-d3708de87696/.meta.tmp' to config b'/volumes/_nogroup/f84cb9de-fc83-48fa-92b5-d3708de87696/.meta' Oct 5 06:14:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f84cb9de-fc83-48fa-92b5-d3708de87696, vol_name:cephfs) < "" Oct 5 06:14:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f84cb9de-fc83-48fa-92b5-d3708de87696", "format": "json"}]: dispatch Oct 5 06:14:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f84cb9de-fc83-48fa-92b5-d3708de87696, vol_name:cephfs) < "" Oct 5 06:14:12 localhost ceph-mgr[301363]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f84cb9de-fc83-48fa-92b5-d3708de87696, vol_name:cephfs) < "" Oct 5 06:14:12 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "bob", "format": "json"}]: dispatch Oct 5 06:14:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:14:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Oct 5 06:14:12 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Oct 5 06:14:12 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 5 06:14:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:14:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "auth_id": "bob", "format": "json"}]: dispatch Oct 5 
06:14:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:14:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72 Oct 5 06:14:13 localhost ceph-mds[300011]: mds.mds.np0005471152.pozuqw asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c/cb4eadde-4727-46da-a199-176718d4dd72],prefix=session evict} (starting...) Oct 5 06:14:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Oct 5 06:14:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:14:13 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Oct 5 06:14:13 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 5 06:14:13 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Oct 5 06:14:13 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Oct 5 06:14:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 131 KiB/s wr, 12 op/s Oct 5 06:14:14 
localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5d91b22f-c30d-46e5-822a-03f4ce5bb7a7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, vol_name:cephfs) < "" Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5d91b22f-c30d-46e5-822a-03f4ce5bb7a7/.meta.tmp' Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5d91b22f-c30d-46e5-822a-03f4ce5bb7a7/.meta.tmp' to config b'/volumes/_nogroup/5d91b22f-c30d-46e5-822a-03f4ce5bb7a7/.meta' Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, vol_name:cephfs) < "" Oct 5 06:14:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5d91b22f-c30d-46e5-822a-03f4ce5bb7a7", "format": "json"}]: dispatch Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, vol_name:cephfs) < "" Oct 5 06:14:15 
localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, vol_name:cephfs) < "" Oct 5 06:14:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:14:15.313 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:14:15Z, description=, device_id=4120182b-df1e-4372-8a7a-1d1453c14eb9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a0445a32-42e6-4f0e-98a6-b579d4ebe887, ip_allocation=immediate, mac_address=fa:16:3e:e2:25:52, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3739, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:14:15Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:14:15 localhost dnsmasq[325876]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:14:15 localhost podman[338949]: 2025-10-05 10:14:15.577213284 +0000 UTC m=+0.057308635 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:14:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:14:15 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:14:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f84cb9de-fc83-48fa-92b5-d3708de87696", "format": "json"}]: dispatch Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f84cb9de-fc83-48fa-92b5-d3708de87696, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f84cb9de-fc83-48fa-92b5-d3708de87696, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:15 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:15.623+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f84cb9de-fc83-48fa-92b5-d3708de87696' of type subvolume Oct 5 06:14:15 localhost 
ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f84cb9de-fc83-48fa-92b5-d3708de87696' of type subvolume Oct 5 06:14:15 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f84cb9de-fc83-48fa-92b5-d3708de87696", "force": true, "format": "json"}]: dispatch Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f84cb9de-fc83-48fa-92b5-d3708de87696, vol_name:cephfs) < "" Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f84cb9de-fc83-48fa-92b5-d3708de87696'' moved to trashcan Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:15 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f84cb9de-fc83-48fa-92b5-d3708de87696, vol_name:cephfs) < "" Oct 5 06:14:15 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:14:15.802 271653 INFO neutron.agent.dhcp.agent [None req-75702ac7-36ce-4db3-855c-98ff4c5839cc - - - - - -] DHCP configuration for ports {'a0445a32-42e6-4f0e-98a6-b579d4ebe887'} is completed#033[00m Oct 5 06:14:15 localhost nova_compute[297130]: 2025-10-05 10:14:15.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 63 KiB/s wr, 6 op/s Oct 5 06:14:16 localhost nova_compute[297130]: 2025-10-05 10:14:16.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:16 localhost openstack_network_exporter[250246]: ERROR 10:14:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:14:16 localhost openstack_network_exporter[250246]: ERROR 10:14:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:14:16 localhost openstack_network_exporter[250246]: ERROR 10:14:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:14:16 localhost openstack_network_exporter[250246]: ERROR 10:14:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:14:16 localhost openstack_network_exporter[250246]: Oct 5 06:14:16 localhost openstack_network_exporter[250246]: ERROR 10:14:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:14:16 localhost openstack_network_exporter[250246]: Oct 5 06:14:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:14:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:14:16 localhost podman[338969]: 2025-10-05 10:14:16.916017701 +0000 UTC m=+0.082070475 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001) Oct 5 06:14:16 localhost podman[338969]: 2025-10-05 10:14:16.926222149 +0000 UTC m=+0.092274923 container exec_died 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:16 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. 
Oct 5 06:14:17 localhost podman[338970]: 2025-10-05 10:14:17.017910294 +0000 UTC m=+0.181681156 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:14:17 localhost podman[338970]: 2025-10-05 10:14:17.031187964 +0000 UTC m=+0.194958846 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:14:17 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:14:17 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 e260: 6 total, 6 up, 6 in Oct 5 06:14:17 localhost nova_compute[297130]: 2025-10-05 10:14:17.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "format": "json"}]: dispatch Oct 5 06:14:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:17.085+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea80201b-ce51-4972-8c9e-95c0a29ba758' of type subvolume Oct 5 06:14:17 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea80201b-ce51-4972-8c9e-95c0a29ba758' of type subvolume Oct 5 06:14:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", 
"sub_name": "ea80201b-ce51-4972-8c9e-95c0a29ba758", "force": true, "format": "json"}]: dispatch Oct 5 06:14:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea80201b-ce51-4972-8c9e-95c0a29ba758'' moved to trashcan Oct 5 06:14:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea80201b-ce51-4972-8c9e-95c0a29ba758, vol_name:cephfs) < "" Oct 5 06:14:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 136 KiB/s wr, 11 op/s Oct 5 06:14:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5d91b22f-c30d-46e5-822a-03f4ce5bb7a7", "format": "json"}]: dispatch Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:18 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:18.415+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'5d91b22f-c30d-46e5-822a-03f4ce5bb7a7' of type subvolume Oct 5 06:14:18 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5d91b22f-c30d-46e5-822a-03f4ce5bb7a7' of type subvolume Oct 5 06:14:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5d91b22f-c30d-46e5-822a-03f4ce5bb7a7", "force": true, "format": "json"}]: dispatch Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, vol_name:cephfs) < "" Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5d91b22f-c30d-46e5-822a-03f4ce5bb7a7'' moved to trashcan Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5d91b22f-c30d-46e5-822a-03f4ce5bb7a7, vol_name:cephfs) < "" Oct 5 06:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. 
Oct 5 06:14:18 localhost podman[339011]: 2025-10-05 10:14:18.907958069 +0000 UTC m=+0.076479044 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.) Oct 5 06:14:18 localhost podman[339011]: 2025-10-05 10:14:18.923250075 +0000 UTC m=+0.091771090 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible) Oct 5 06:14:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:14:18 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta' Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:14:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", "format": "json"}]: dispatch Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:14:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:14:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 136 KiB/s wr, 11 op/s Oct 5 06:14:20 localhost nova_compute[297130]: 2025-10-05 10:14:20.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:20 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "format": "json"}]: dispatch Oct 5 06:14:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:20 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:20.361+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5c152103-e1c7-44cb-9a71-b5439bf3485c' of type subvolume Oct 5 06:14:20 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5c152103-e1c7-44cb-9a71-b5439bf3485c' of type subvolume Oct 5 06:14:20 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5c152103-e1c7-44cb-9a71-b5439bf3485c", "force": true, "format": "json"}]: dispatch Oct 5 06:14:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:14:20 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5c152103-e1c7-44cb-9a71-b5439bf3485c'' moved to trashcan Oct 5 06:14:20 localhost ceph-mgr[301363]: 
[volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:20 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5c152103-e1c7-44cb-9a71-b5439bf3485c, vol_name:cephfs) < "" Oct 5 06:14:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:14:20.409 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:14:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:14:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:14:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:14:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:14:20 localhost nova_compute[297130]: 2025-10-05 10:14:20.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 521 B/s rd, 111 KiB/s wr, 9 op/s Oct 5 06:14:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "709609b0-dfb7-448a-b82a-bf98fc525c86", "size": 2147483648, "namespace_isolated": true, "mode": "0755", 
"format": "json"}]: dispatch Oct 5 06:14:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:709609b0-dfb7-448a-b82a-bf98fc525c86, vol_name:cephfs) < "" Oct 5 06:14:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/709609b0-dfb7-448a-b82a-bf98fc525c86/.meta.tmp' Oct 5 06:14:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/709609b0-dfb7-448a-b82a-bf98fc525c86/.meta.tmp' to config b'/volumes/_nogroup/709609b0-dfb7-448a-b82a-bf98fc525c86/.meta' Oct 5 06:14:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:709609b0-dfb7-448a-b82a-bf98fc525c86, vol_name:cephfs) < "" Oct 5 06:14:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "709609b0-dfb7-448a-b82a-bf98fc525c86", "format": "json"}]: dispatch Oct 5 06:14:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:709609b0-dfb7-448a-b82a-bf98fc525c86, vol_name:cephfs) < "" Oct 5 06:14:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:709609b0-dfb7-448a-b82a-bf98fc525c86, vol_name:cephfs) < "" Oct 5 06:14:22 localhost nova_compute[297130]: 2025-10-05 10:14:22.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:22 localhost ceph-mgr[301363]: log_channel(audit) 
log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", "snap_name": "1baf90d0-bd7a-42a7-ac36-9e7442037f97", "format": "json"}]: dispatch Oct 5 06:14:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:14:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:14:22 localhost nova_compute[297130]: 2025-10-05 10:14:22.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:22 localhost nova_compute[297130]: 2025-10-05 10:14:22.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:14:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 107 KiB/s wr, 8 op/s Oct 5 06:14:23 localhost podman[339032]: 2025-10-05 10:14:23.895276359 +0000 UTC m=+0.067976794 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:14:23 localhost podman[339032]: 2025-10-05 10:14:23.904278084 +0000 UTC m=+0.076978549 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:23 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:14:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.293 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.294 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.294 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available 
compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.295 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:14:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "709609b0-dfb7-448a-b82a-bf98fc525c86", "format": "json"}]: dispatch Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:709609b0-dfb7-448a-b82a-bf98fc525c86, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:709609b0-dfb7-448a-b82a-bf98fc525c86, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:25 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '709609b0-dfb7-448a-b82a-bf98fc525c86' of type subvolume Oct 5 06:14:25 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:25.356+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '709609b0-dfb7-448a-b82a-bf98fc525c86' of type subvolume Oct 5 06:14:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "709609b0-dfb7-448a-b82a-bf98fc525c86", "force": true, "format": "json"}]: 
dispatch Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:709609b0-dfb7-448a-b82a-bf98fc525c86, vol_name:cephfs) < "" Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/709609b0-dfb7-448a-b82a-bf98fc525c86'' moved to trashcan Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:709609b0-dfb7-448a-b82a-bf98fc525c86, vol_name:cephfs) < "" Oct 5 06:14:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", "snap_name": "1baf90d0-bd7a-42a7-ac36-9e7442037f97", "target_sub_name": "2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50", "format": "json"}]: dispatch Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, target_sub_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, vol_name:cephfs) < "" Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta' Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.clone_index] tracking-id 9f0b381d-96f2-47b4-957b-aed4f7d5b9b7 for path b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50' Oct 5 06:14:25 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:14:25 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1497736329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta' Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, target_sub_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, vol_name:cephfs) < "" Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50 Oct 5 06:14:25 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50) Oct 5 06:14:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50", "format": "json"}]: dispatch Oct 5 06:14:25 localhost 
ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.796 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 107 KiB/s wr, 8 op/s Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.970 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.973 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11492MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.973 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:14:25 localhost nova_compute[297130]: 2025-10-05 10:14:25.973 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:14:26 localhost podman[248157]: time="2025-10-05T10:14:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:14:26 localhost podman[248157]: @ - - [05/Oct/2025:10:14:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.038 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.039 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final 
resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:14:26 localhost podman[248157]: @ - - [05/Oct/2025:10:14:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19373 "" "Go-http-client/1.1" Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.074 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:14:26 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:14:26 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3224327789' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.569 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.577 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.604 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.606 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:14:26 localhost nova_compute[297130]: 2025-10-05 10:14:26.607 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.608 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.609 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.609 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.632 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.633 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:27 localhost nova_compute[297130]: 2025-10-05 10:14:27.633 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 945 B/s rd, 128 KiB/s wr, 10 op/s Oct 5 06:14:28 localhost nova_compute[297130]: 2025-10-05 10:14:28.292 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:29 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50) -- by 0 seconds Oct 5 06:14:29 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' Oct 5 06:14:29 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' to 
config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta' Oct 5 06:14:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:29 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, vol_name:cephfs) < "" Oct 5 06:14:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 67 KiB/s wr, 6 op/s Oct 5 06:14:30 localhost nova_compute[297130]: 2025-10-05 10:14:30.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:30 localhost nova_compute[297130]: 2025-10-05 10:14:30.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:14:30 localhost nova_compute[297130]: 2025-10-05 10:14:30.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.snap/1baf90d0-bd7a-42a7-ac36-9e7442037f97/9e500109-4c33-48f6-8cf2-71b336ea2fc2' to b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/2b842776-97cb-4d32-a806-24d21ff90683' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f/.meta.tmp' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f/.meta.tmp' to config b'/volumes/_nogroup/f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f/.meta' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, vol_name:cephfs) < "" Oct 5 06:14:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f", "format": "json"}]: dispatch Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, vol_name:cephfs) < "" Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta' Oct 5 06:14:30 localhost nova_compute[297130]: 2025-10-05 10:14:30.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.clone_index] untracking 9f0b381d-96f2-47b4-957b-aed4f7d5b9b7 Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta.tmp' to config b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50/.meta' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50) Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, vol_name:cephfs) < "" Oct 5 06:14:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"1456c2b1-f2b3-4696-970f-98de5d9e52dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, vol_name:cephfs) < "" Oct 5 06:14:30 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:14:30 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:14:30 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:14:30 localhost podman[339109]: 2025-10-05 10:14:30.805181341 +0000 UTC m=+0.064959820 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1456c2b1-f2b3-4696-970f-98de5d9e52dc/.meta.tmp' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1456c2b1-f2b3-4696-970f-98de5d9e52dc/.meta.tmp' to config b'/volumes/_nogroup/1456c2b1-f2b3-4696-970f-98de5d9e52dc/.meta' Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, vol_name:cephfs) < "" Oct 5 06:14:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1456c2b1-f2b3-4696-970f-98de5d9e52dc", "format": "json"}]: dispatch Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, vol_name:cephfs) < "" Oct 5 06:14:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, vol_name:cephfs) < "" Oct 5 06:14:30 localhost nova_compute[297130]: 2025-10-05 10:14:30.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 67 KiB/s wr, 6 op/s Oct 5 06:14:32 localhost nova_compute[297130]: 2025-10-05 10:14:32.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f", "format": "json"}]: dispatch Oct 5 06:14:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:32 localhost 
ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:32 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:32.382+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f' of type subvolume Oct 5 06:14:32 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f' of type subvolume Oct 5 06:14:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f", "force": true, "format": "json"}]: dispatch Oct 5 06:14:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, vol_name:cephfs) < "" Oct 5 06:14:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f'' moved to trashcan Oct 5 06:14:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f1a578b5-99de-4cef-bc36-d4ca4d5a5c2f, vol_name:cephfs) < "" Oct 5 06:14:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 06:14:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:14:32 localhost podman[339130]: 2025-10-05 10:14:32.905627293 +0000 UTC m=+0.068351813 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd) Oct 5 06:14:32 
localhost podman[339130]: 2025-10-05 10:14:32.937921242 +0000 UTC m=+0.100645772 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:32 localhost systemd[1]: tmp-crun.dXHSo7.mount: Deactivated successfully. 
Oct 5 06:14:32 localhost podman[339131]: 2025-10-05 10:14:32.961579649 +0000 UTC m=+0.120111165 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:14:32 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:14:33 localhost podman[339131]: 2025-10-05 10:14:33.044394989 +0000 UTC m=+0.202926585 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:14:33 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:14:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 108 KiB/s wr, 10 op/s Oct 5 06:14:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1456c2b1-f2b3-4696-970f-98de5d9e52dc", "format": "json"}]: dispatch Oct 5 06:14:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:34 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:34.528+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1456c2b1-f2b3-4696-970f-98de5d9e52dc' of type subvolume Oct 5 06:14:34 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1456c2b1-f2b3-4696-970f-98de5d9e52dc' of type subvolume Oct 5 06:14:34 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1456c2b1-f2b3-4696-970f-98de5d9e52dc", "force": true, "format": "json"}]: dispatch Oct 5 06:14:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs 
subvolume rm, sub_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, vol_name:cephfs) < "" Oct 5 06:14:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1456c2b1-f2b3-4696-970f-98de5d9e52dc'' moved to trashcan Oct 5 06:14:34 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:34 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1456c2b1-f2b3-4696-970f-98de5d9e52dc, vol_name:cephfs) < "" Oct 5 06:14:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "012e607f-413a-4d97-bdcd-efa4faee2182", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:012e607f-413a-4d97-bdcd-efa4faee2182, vol_name:cephfs) < "" Oct 5 06:14:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/012e607f-413a-4d97-bdcd-efa4faee2182/.meta.tmp' Oct 5 06:14:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/012e607f-413a-4d97-bdcd-efa4faee2182/.meta.tmp' to config b'/volumes/_nogroup/012e607f-413a-4d97-bdcd-efa4faee2182/.meta' Oct 5 06:14:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:012e607f-413a-4d97-bdcd-efa4faee2182, vol_name:cephfs) < "" Oct 5 06:14:35 localhost ceph-mgr[301363]: log_channel(audit) 
log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "012e607f-413a-4d97-bdcd-efa4faee2182", "format": "json"}]: dispatch Oct 5 06:14:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:012e607f-413a-4d97-bdcd-efa4faee2182, vol_name:cephfs) < "" Oct 5 06:14:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:012e607f-413a-4d97-bdcd-efa4faee2182, vol_name:cephfs) < "" Oct 5 06:14:35 localhost nova_compute[297130]: 2025-10-05 10:14:35.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 68 KiB/s wr, 6 op/s Oct 5 06:14:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:14:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:14:36 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:14:36.904 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:14:36Z, description=, device_id=f142f274-ac09-450d-871a-7fa087bf681c, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8b949658-42b9-4c9b-8a20-0ce1e8b13aa2, ip_allocation=immediate, mac_address=fa:16:3e:c5:c4:6b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3784, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:14:36Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:14:36 localhost systemd[1]: tmp-crun.ueFVsH.mount: Deactivated successfully. 
Oct 5 06:14:36 localhost podman[339172]: 2025-10-05 10:14:36.928080931 +0000 UTC m=+0.090896230 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:14:36 localhost podman[339172]: 2025-10-05 10:14:36.937086993 +0000 UTC m=+0.099902362 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3) Oct 5 06:14:36 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:14:37 localhost podman[339173]: 2025-10-05 10:14:37.033205901 +0000 UTC m=+0.191668772 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, io.buildah.version=1.41.3) Oct 5 06:14:37 localhost nova_compute[297130]: 2025-10-05 10:14:37.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:37 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:14:37 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:14:37 localhost dnsmasq-dhcp[325876]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:14:37 localhost podman[339229]: 2025-10-05 10:14:37.134268122 +0000 UTC m=+0.071763462 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:14:37 localhost podman[339173]: 2025-10-05 10:14:37.177439675 +0000 UTC m=+0.335902746 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller) Oct 5 06:14:37 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 06:14:37 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:14:37.348 271653 INFO neutron.agent.dhcp.agent [None req-cf5333c0-a126-410c-a7be-7451aeb4f677 - - - - - -] DHCP configuration for ports {'8b949658-42b9-4c9b-8a20-0ce1e8b13aa2'} is completed#033[00m Oct 5 06:14:37 localhost nova_compute[297130]: 2025-10-05 10:14:37.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 94 KiB/s wr, 9 op/s Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.887 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.888 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, 
no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:14:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Oct 5 06:14:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "012e607f-413a-4d97-bdcd-efa4faee2182", "format": "json"}]: dispatch Oct 5 06:14:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:012e607f-413a-4d97-bdcd-efa4faee2182, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:012e607f-413a-4d97-bdcd-efa4faee2182, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:39 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:39.081+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '012e607f-413a-4d97-bdcd-efa4faee2182' of type subvolume Oct 5 06:14:39 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '012e607f-413a-4d97-bdcd-efa4faee2182' of type subvolume Oct 5 06:14:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "012e607f-413a-4d97-bdcd-efa4faee2182", "force": true, "format": "json"}]: dispatch Oct 5 06:14:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:012e607f-413a-4d97-bdcd-efa4faee2182, vol_name:cephfs) < "" Oct 5 06:14:39 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/012e607f-413a-4d97-bdcd-efa4faee2182'' moved to trashcan Oct 5 06:14:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:012e607f-413a-4d97-bdcd-efa4faee2182, vol_name:cephfs) < "" Oct 5 06:14:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 68 KiB/s wr, 7 op/s Oct 5 06:14:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:14:40 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:14:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:14:40 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:14:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:14:40 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev d1094218-4655-4172-97a0-ec78ec5dfb44 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:14:40 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev d1094218-4655-4172-97a0-ec78ec5dfb44 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:14:40 localhost ceph-mgr[301363]: [progress INFO root] Completed event 
d1094218-4655-4172-97a0-ec78ec5dfb44 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:14:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:14:40 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:14:40 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:14:40 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:14:40 localhost nova_compute[297130]: 2025-10-05 10:14:40.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:40 localhost nova_compute[297130]: 2025-10-05 10:14:40.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:14:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:14:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:14:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:14:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:14:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:14:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 68 KiB/s wr, 7 op/s Oct 5 06:14:42 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:14:42 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:14:42 localhost nova_compute[297130]: 2025-10-05 10:14:42.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57020aab-3cf4-4819-8c78-e7e770bc836d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57020aab-3cf4-4819-8c78-e7e770bc836d, vol_name:cephfs) < "" Oct 5 06:14:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57020aab-3cf4-4819-8c78-e7e770bc836d/.meta.tmp' Oct 5 06:14:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57020aab-3cf4-4819-8c78-e7e770bc836d/.meta.tmp' to config b'/volumes/_nogroup/57020aab-3cf4-4819-8c78-e7e770bc836d/.meta' Oct 5 06:14:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, 
size:1073741824, sub_name:57020aab-3cf4-4819-8c78-e7e770bc836d, vol_name:cephfs) < "" Oct 5 06:14:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57020aab-3cf4-4819-8c78-e7e770bc836d", "format": "json"}]: dispatch Oct 5 06:14:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57020aab-3cf4-4819-8c78-e7e770bc836d, vol_name:cephfs) < "" Oct 5 06:14:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57020aab-3cf4-4819-8c78-e7e770bc836d, vol_name:cephfs) < "" Oct 5 06:14:43 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:14:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 89 KiB/s wr, 9 op/s Oct 5 06:14:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, mode:0755, prefix:fs 
subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:14:45 localhost nova_compute[297130]: 2025-10-05 10:14:45.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:45 localhost nova_compute[297130]: 2025-10-05 10:14:45.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 47 KiB/s wr, 5 op/s Oct 5 06:14:45 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:14:45 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:14:45 localhost podman[339357]: 2025-10-05 10:14:45.923986492 +0000 UTC m=+0.056065581 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Oct 5 06:14:45 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:14:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57020aab-3cf4-4819-8c78-e7e770bc836d", "format": "json"}]: dispatch Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO 
volumes.module] Starting _cmd_fs_clone_status(clone_name:57020aab-3cf4-4819-8c78-e7e770bc836d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57020aab-3cf4-4819-8c78-e7e770bc836d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:45 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:45.984+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57020aab-3cf4-4819-8c78-e7e770bc836d' of type subvolume Oct 5 06:14:45 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57020aab-3cf4-4819-8c78-e7e770bc836d' of type subvolume Oct 5 06:14:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57020aab-3cf4-4819-8c78-e7e770bc836d", "force": true, "format": "json"}]: dispatch Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57020aab-3cf4-4819-8c78-e7e770bc836d, vol_name:cephfs) < "" Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57020aab-3cf4-4819-8c78-e7e770bc836d'' moved to trashcan Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57020aab-3cf4-4819-8c78-e7e770bc836d, vol_name:cephfs) < "" Oct 5 06:14:46 localhost openstack_network_exporter[250246]: ERROR 10:14:46 appctl.go:144: 
Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:14:46 localhost openstack_network_exporter[250246]: ERROR 10:14:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:14:46 localhost openstack_network_exporter[250246]: ERROR 10:14:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:14:46 localhost openstack_network_exporter[250246]: ERROR 10:14:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:14:46 localhost openstack_network_exporter[250246]: Oct 5 06:14:46 localhost openstack_network_exporter[250246]: ERROR 10:14:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:14:46 localhost openstack_network_exporter[250246]: Oct 5 06:14:47 localhost nova_compute[297130]: 2025-10-05 10:14:47.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:14:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:14:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 70 KiB/s wr, 6 op/s Oct 5 06:14:47 localhost podman[339377]: 2025-10-05 10:14:47.899721134 +0000 UTC m=+0.071817235 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS) Oct 5 06:14:47 localhost podman[339377]: 2025-10-05 10:14:47.910093664 +0000 UTC m=+0.082189735 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true) Oct 5 06:14:47 localhost systemd[1]: 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:14:47 localhost podman[339378]: 2025-10-05 10:14:47.966209845 +0000 UTC m=+0.131660636 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:14:48 localhost podman[339378]: 2025-10-05 10:14:48.00431632 +0000 UTC m=+0.169767091 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:14:48 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:14:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "8748e4f6-38a8-42cb-9f01-acf698f58541", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:14:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:14:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:14:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:14:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 44 KiB/s wr, 2 op/s Oct 5 06:14:49 localhost systemd[1]: tmp-crun.FvpTzE.mount: Deactivated successfully. 
Oct 5 06:14:49 localhost podman[339419]: 2025-10-05 10:14:49.923399999 +0000 UTC m=+0.090903960 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7) Oct 5 06:14:49 localhost podman[339419]: 2025-10-05 10:14:49.964122725 +0000 UTC m=+0.131626686 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, name=ubi9-minimal, build-date=2025-08-20T13:12:41) Oct 5 06:14:49 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:14:50 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bd879934-a84d-43e2-984d-2893bc73f198", "snap_name": "42fd7562-11a4-4198-bbca-2b4ad7801fe7_cff48581-31a3-43b0-8779-8b026d5dff4e", "force": true, "format": "json"}]: dispatch Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42fd7562-11a4-4198-bbca-2b4ad7801fe7_cff48581-31a3-43b0-8779-8b026d5dff4e, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta.tmp' Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta.tmp' to config b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta' Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42fd7562-11a4-4198-bbca-2b4ad7801fe7_cff48581-31a3-43b0-8779-8b026d5dff4e, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:50 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bd879934-a84d-43e2-984d-2893bc73f198", "snap_name": "42fd7562-11a4-4198-bbca-2b4ad7801fe7", "force": true, "format": "json"}]: dispatch Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:42fd7562-11a4-4198-bbca-2b4ad7801fe7, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta.tmp' Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta.tmp' to config b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198/.meta' Oct 5 06:14:50 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42fd7562-11a4-4198-bbca-2b4ad7801fe7, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:50 localhost nova_compute[297130]: 2025-10-05 10:14:50.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 44 KiB/s wr, 2 op/s Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.091124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292091213, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1148, "num_deletes": 258, "total_data_size": 1217865, "memory_usage": 1245080, "flush_reason": "Manual Compaction"} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292100074, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 795354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32513, "largest_seqno": 33656, "table_properties": {"data_size": 790404, "index_size": 2357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12438, "raw_average_key_size": 20, "raw_value_size": 779672, "raw_average_value_size": 1288, "num_data_blocks": 103, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659237, "oldest_key_time": 1759659237, "file_creation_time": 1759659292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9008 microseconds, and 3476 cpu microseconds. Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.100141) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 795354 bytes OK Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.100164) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.102614) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.102636) EVENT_LOG_v1 {"time_micros": 1759659292102629, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.102657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1211983, prev total WAL file size 
1212307, number of live WAL files 2. Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.103662) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353136' seq:72057594037927935, type:22 .. '6C6F676D0034373638' seq:0, type:0; will stop at (end) Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(776KB)], [51(15MB)] Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292103745, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 17466012, "oldest_snapshot_seqno": -1} Oct 5 06:14:52 localhost nova_compute[297130]: 2025-10-05 10:14:52.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "84306b34-8068-40c7-8bbd-d5980283ac11", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84306b34-8068-40c7-8bbd-d5980283ac11, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14450 keys, 17272790 bytes, temperature: kUnknown Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292206706, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 17272790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17190521, "index_size": 45048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36165, "raw_key_size": 388937, "raw_average_key_size": 26, "raw_value_size": 16945373, "raw_average_value_size": 1172, "num_data_blocks": 1665, "num_entries": 14450, "num_filter_entries": 14450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759659292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. 
max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.206974) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 17272790 bytes Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.209269) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.5 rd, 167.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.9 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(43.7) write-amplify(21.7) OK, records in: 14990, records dropped: 540 output_compression: NoCompression Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.209288) EVENT_LOG_v1 {"time_micros": 1759659292209279, "job": 30, "event": "compaction_finished", "compaction_time_micros": 103046, "compaction_time_cpu_micros": 52247, "output_level": 6, "num_output_files": 1, "total_output_size": 17272790, "num_input_records": 14990, "num_output_records": 14450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659292209494, "job": 30, "event": "table_file_deletion", "file_number": 53} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: 
EVENT_LOG_v1 {"time_micros": 1759659292211114, "job": 30, "event": "table_file_deletion", "file_number": 51} Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.103498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.211251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.211262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.211266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.211277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:14:52.211281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/84306b34-8068-40c7-8bbd-d5980283ac11/.meta.tmp' Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/84306b34-8068-40c7-8bbd-d5980283ac11/.meta.tmp' to config b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/84306b34-8068-40c7-8bbd-d5980283ac11/.meta' Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:84306b34-8068-40c7-8bbd-d5980283ac11, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "84306b34-8068-40c7-8bbd-d5980283ac11", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume getpath, sub_name:84306b34-8068-40c7-8bbd-d5980283ac11, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume getpath, sub_name:84306b34-8068-40c7-8bbd-d5980283ac11, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "69213b57-6988-4376-962e-d6e9464e419c", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:69213b57-6988-4376-962e-d6e9464e419c, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/69213b57-6988-4376-962e-d6e9464e419c/.meta.tmp' Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/69213b57-6988-4376-962e-d6e9464e419c/.meta.tmp' to config b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/69213b57-6988-4376-962e-d6e9464e419c/.meta' Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:69213b57-6988-4376-962e-d6e9464e419c, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "69213b57-6988-4376-962e-d6e9464e419c", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume getpath, sub_name:69213b57-6988-4376-962e-d6e9464e419c, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume getpath, sub_name:69213b57-6988-4376-962e-d6e9464e419c, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e7b7bcf0-4020-4086-b097-bf8fa1c94e10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "8748e4f6-38a8-42cb-9f01-acf698f58541", "format": "json"}]: dispatch Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, mode:0755, namespace_isolated:True, prefix:fs subvolume create, 
size:1073741824, sub_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/8748e4f6-38a8-42cb-9f01-acf698f58541/e7b7bcf0-4020-4086-b097-bf8fa1c94e10/.meta.tmp' Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/8748e4f6-38a8-42cb-9f01-acf698f58541/e7b7bcf0-4020-4086-b097-bf8fa1c94e10/.meta.tmp' to config b'/volumes/8748e4f6-38a8-42cb-9f01-acf698f58541/e7b7bcf0-4020-4086-b097-bf8fa1c94e10/.meta' Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e7b7bcf0-4020-4086-b097-bf8fa1c94e10", "group_name": "8748e4f6-38a8-42cb-9f01-acf698f58541", "format": "json"}]: dispatch Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs subvolume getpath, sub_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, vol_name:cephfs) < "" Oct 5 06:14:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs subvolume getpath, sub_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, vol_name:cephfs) < "" Oct 5 06:14:53 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"bd879934-a84d-43e2-984d-2893bc73f198", "format": "json"}]: dispatch Oct 5 06:14:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd879934-a84d-43e2-984d-2893bc73f198, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd879934-a84d-43e2-984d-2893bc73f198, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:14:53 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:14:53.873+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd879934-a84d-43e2-984d-2893bc73f198' of type subvolume Oct 5 06:14:53 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd879934-a84d-43e2-984d-2893bc73f198' of type subvolume Oct 5 06:14:53 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd879934-a84d-43e2-984d-2893bc73f198", "force": true, "format": "json"}]: dispatch Oct 5 06:14:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:53 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd879934-a84d-43e2-984d-2893bc73f198'' moved to trashcan Oct 5 06:14:53 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:14:53 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:bd879934-a84d-43e2-984d-2893bc73f198, vol_name:cephfs) < "" Oct 5 06:14:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 70 KiB/s wr, 5 op/s Oct 5 06:14:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:14:54 localhost podman[339439]: 2025-10-05 10:14:54.916528855 +0000 UTC m=+0.079777820 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Oct 5 06:14:54 localhost podman[339439]: 2025-10-05 10:14:54.925256389 +0000 UTC m=+0.088505384 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:14:54 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:14:55 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e261 e261: 6 total, 6 up, 6 in Oct 5 06:14:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 203 B/s rd, 60 KiB/s wr, 4 op/s Oct 5 06:14:55 localhost nova_compute[297130]: 2025-10-05 10:14:55.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:56 localhost podman[248157]: time="2025-10-05T10:14:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:14:56 localhost podman[248157]: @ - - [05/Oct/2025:10:14:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:14:56 localhost podman[248157]: @ - - [05/Oct/2025:10:14:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19369 "" "Go-http-client/1.1" Oct 5 06:14:57 localhost nova_compute[297130]: 2025-10-05 10:14:57.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:57 localhost ceph-mgr[301363]: 
log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 611 B/s rd, 56 KiB/s wr, 6 op/s Oct 5 06:14:58 localhost ovn_metadata_agent[163196]: 2025-10-05 10:14:58.398 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Oct 5 06:14:58 localhost ovn_metadata_agent[163196]: 2025-10-05 10:14:58.399 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Oct 5 06:14:58 localhost nova_compute[297130]: 2025-10-05 10:14:58.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:14:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:14:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 611 B/s rd, 56 KiB/s wr, 6 op/s Oct 5 06:15:00 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "snap_name": 
"72a8d8fe-af79-494d-8d3f-c0f5392f2036", "force": true, "format": "json"}]: dispatch Oct 5 06:15:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolumegroup snapshot rm, snap_name:72a8d8fe-af79-494d-8d3f-c0f5392f2036, vol_name:cephfs) < "" Oct 5 06:15:00 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolumegroup snapshot rm, snap_name:72a8d8fe-af79-494d-8d3f-c0f5392f2036, vol_name:cephfs) < "" Oct 5 06:15:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:15:00 localhost ceph-osd[31524]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 19K writes, 78K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 19K writes, 7034 syncs, 2.78 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 41.91 MB, 0.07 MB/s#012Interval WAL: 11K writes, 5188 syncs, 2.28 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:15:00 localhost nova_compute[297130]: 2025-10-05 10:15:00.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 611 B/s rd, 56 KiB/s wr, 6 op/s Oct 5 06:15:02 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e262 e262: 6 total, 6 up, 6 in Oct 5 06:15:02 localhost 
nova_compute[297130]: 2025-10-05 10:15:02.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:15:02.400 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:15:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:15:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:15:03 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e7b7bcf0-4020-4086-b097-bf8fa1c94e10", "group_name": "8748e4f6-38a8-42cb-9f01-acf698f58541", "format": "json"}]: dispatch Oct 5 06:15:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:03 localhost podman[339458]: 2025-10-05 10:15:03.904856864 +0000 UTC m=+0.066713827 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 
'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:15:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:03 localhost podman[339458]: 2025-10-05 10:15:03.912173911 +0000 UTC m=+0.074030884 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', 
'--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:15:03 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:03.911+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e7b7bcf0-4020-4086-b097-bf8fa1c94e10' of type subvolume Oct 5 06:15:03 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e7b7bcf0-4020-4086-b097-bf8fa1c94e10' of type subvolume Oct 5 06:15:03 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e7b7bcf0-4020-4086-b097-bf8fa1c94e10", "force": true, "group_name": "8748e4f6-38a8-42cb-9f01-acf698f58541", "format": "json"}]: dispatch Oct 5 06:15:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs subvolume rm, sub_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, vol_name:cephfs) < "" Oct 5 06:15:03 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] 
subvolume path 'b'/volumes/8748e4f6-38a8-42cb-9f01-acf698f58541/e7b7bcf0-4020-4086-b097-bf8fa1c94e10'' moved to trashcan Oct 5 06:15:03 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:03 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs subvolume rm, sub_name:e7b7bcf0-4020-4086-b097-bf8fa1c94e10, vol_name:cephfs) < "" Oct 5 06:15:03 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:15:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 473 B/s rd, 49 KiB/s wr, 4 op/s Oct 5 06:15:03 localhost systemd[1]: tmp-crun.HDK8Dd.mount: Deactivated successfully. Oct 5 06:15:03 localhost podman[339457]: 2025-10-05 10:15:03.977079889 +0000 UTC m=+0.138968223 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:15:04 localhost podman[339457]: 2025-10-05 10:15:04.013593042 +0000 UTC m=+0.175481376 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_id=multipathd) Oct 5 06:15:04 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:15:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:15:05 localhost ceph-osd[32468]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 88K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 23K writes, 7982 syncs, 2.89 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 30.23 MB, 0.05 MB/s#012Interval WAL: 13K writes, 5786 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Oct 5 06:15:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 
B/s rd, 42 KiB/s wr, 3 op/s Oct 5 06:15:06 localhost nova_compute[297130]: 2025-10-05 10:15:06.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "69213b57-6988-4376-962e-d6e9464e419c", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:69213b57-6988-4376-962e-d6e9464e419c, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:69213b57-6988-4376-962e-d6e9464e419c, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:07 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:07.044+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '69213b57-6988-4376-962e-d6e9464e419c' of type subvolume Oct 5 06:15:07 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '69213b57-6988-4376-962e-d6e9464e419c' of type subvolume Oct 5 06:15:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "69213b57-6988-4376-962e-d6e9464e419c", "force": true, "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume rm, sub_name:69213b57-6988-4376-962e-d6e9464e419c, vol_name:cephfs) < "" Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/69213b57-6988-4376-962e-d6e9464e419c'' moved to trashcan Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume rm, sub_name:69213b57-6988-4376-962e-d6e9464e419c, vol_name:cephfs) < "" Oct 5 06:15:07 localhost nova_compute[297130]: 2025-10-05 10:15:07.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:15:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta.tmp' Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta.tmp' to config b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta' Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:07 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "format": "json"}]: dispatch Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:07 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:07 localhost podman[339500]: 2025-10-05 10:15:07.900575361 +0000 UTC m=+0.066046870 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible) Oct 5 06:15:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 2 op/s Oct 5 06:15:07 localhost podman[339500]: 2025-10-05 10:15:07.956562459 +0000 UTC m=+0.122033918 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:15:07 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 06:15:08 localhost podman[339499]: 2025-10-05 10:15:07.961740689 +0000 UTC m=+0.127767362 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid) Oct 5 06:15:08 localhost podman[339499]: 2025-10-05 10:15:08.045278207 +0000 UTC m=+0.211304920 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:15:08 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:15:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 2 op/s Oct 5 06:15:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "84306b34-8068-40c7-8bbd-d5980283ac11", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:15:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:84306b34-8068-40c7-8bbd-d5980283ac11, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:84306b34-8068-40c7-8bbd-d5980283ac11, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:10 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:10.297+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84306b34-8068-40c7-8bbd-d5980283ac11' of type subvolume Oct 5 06:15:10 localhost ceph-mgr[301363]: mgr.server reply 
reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84306b34-8068-40c7-8bbd-d5980283ac11' of type subvolume Oct 5 06:15:10 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "84306b34-8068-40c7-8bbd-d5980283ac11", "force": true, "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "format": "json"}]: dispatch Oct 5 06:15:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume rm, sub_name:84306b34-8068-40c7-8bbd-d5980283ac11, vol_name:cephfs) < "" Oct 5 06:15:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/e72e0425-f767-4496-8b1b-e84ec8b454da/84306b34-8068-40c7-8bbd-d5980283ac11'' moved to trashcan Oct 5 06:15:10 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:10 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolume rm, sub_name:84306b34-8068-40c7-8bbd-d5980283ac11, vol_name:cephfs) < "" Oct 5 06:15:11 localhost nova_compute[297130]: 2025-10-05 10:15:11.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "snap_name": "f8cfd2da-4feb-4b45-b344-adf18bde0dd4", "format": "json"}]: dispatch Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f8cfd2da-4feb-4b45-b344-adf18bde0dd4, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f8cfd2da-4feb-4b45-b344-adf18bde0dd4, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:15:11 Oct 5 06:15:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:15:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:15:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_data', 'backups', 'volumes', 'vms', 'images', '.mgr', 'manila_metadata'] Oct 5 06:15:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:15:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:15:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 2 op/s Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:15:12 localhost ceph-mgr[301363]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:15:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.001815991851619207 of space, bias 4.0, pg target 1.445529513888889 quantized to 16 (current 16) Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:15:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:15:12 localhost nova_compute[297130]: 2025-10-05 10:15:12.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolumegroup rm", "vol_name": "cephfs", "group_name": "8748e4f6-38a8-42cb-9f01-acf698f58541", "force": true, "format": "json"}]: dispatch Oct 5 06:15:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:15:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:8748e4f6-38a8-42cb-9f01-acf698f58541, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:15:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 432 B/s rd, 58 KiB/s wr, 4 op/s Oct 5 06:15:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "snap_name": "f8cfd2da-4feb-4b45-b344-adf18bde0dd4_02247a12-b2e0-4fe7-9de5-45d7e104f34e", "force": true, "format": "json"}]: dispatch Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f8cfd2da-4feb-4b45-b344-adf18bde0dd4_02247a12-b2e0-4fe7-9de5-45d7e104f34e, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta.tmp' Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta.tmp' to config b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta' Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f8cfd2da-4feb-4b45-b344-adf18bde0dd4_02247a12-b2e0-4fe7-9de5-45d7e104f34e, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:14 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "snap_name": "f8cfd2da-4feb-4b45-b344-adf18bde0dd4", "force": true, "format": "json"}]: dispatch Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f8cfd2da-4feb-4b45-b344-adf18bde0dd4, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta.tmp' Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta.tmp' to config b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e/.meta' Oct 5 06:15:14 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f8cfd2da-4feb-4b45-b344-adf18bde0dd4, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB 
avail; 426 B/s rd, 42 KiB/s wr, 3 op/s Oct 5 06:15:16 localhost nova_compute[297130]: 2025-10-05 10:15:16.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:16 localhost ovn_controller[157556]: 2025-10-05T10:15:16Z|00202|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Oct 5 06:15:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e72e0425-f767-4496-8b1b-e84ec8b454da", "force": true, "format": "json"}]: dispatch Oct 5 06:15:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:15:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e72e0425-f767-4496-8b1b-e84ec8b454da, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:15:16 localhost openstack_network_exporter[250246]: ERROR 10:15:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:15:16 localhost openstack_network_exporter[250246]: ERROR 10:15:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:15:16 localhost openstack_network_exporter[250246]: Oct 5 06:15:16 localhost openstack_network_exporter[250246]: ERROR 10:15:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:15:16 localhost openstack_network_exporter[250246]: Oct 5 06:15:16 localhost openstack_network_exporter[250246]: ERROR 10:15:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:16 localhost 
openstack_network_exporter[250246]: ERROR 10:15:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:17 localhost nova_compute[297130]: 2025-10-05 10:15:17.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "format": "json"}]: dispatch Oct 5 06:15:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:17 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:17.831+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '83b2c5d4-1a20-4060-9735-d1a707e8220e' of type subvolume Oct 5 06:15:17 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '83b2c5d4-1a20-4060-9735-d1a707e8220e' of type subvolume Oct 5 06:15:17 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "83b2c5d4-1a20-4060-9735-d1a707e8220e", "force": true, "format": "json"}]: dispatch Oct 5 06:15:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/83b2c5d4-1a20-4060-9735-d1a707e8220e'' moved to trashcan Oct 5 06:15:17 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:17 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:83b2c5d4-1a20-4060-9735-d1a707e8220e, vol_name:cephfs) < "" Oct 5 06:15:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 57 KiB/s wr, 6 op/s Oct 5 06:15:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:15:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:15:18 localhost systemd[1]: tmp-crun.TVFHL4.mount: Deactivated successfully. Oct 5 06:15:18 localhost systemd[1]: tmp-crun.dK6hfE.mount: Deactivated successfully. 
Oct 5 06:15:18 localhost podman[339544]: 2025-10-05 10:15:18.923248252 +0000 UTC m=+0.089760039 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:15:18 localhost podman[339544]: 2025-10-05 10:15:18.931982658 +0000 UTC m=+0.098494425 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Oct 5 06:15:18 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:15:18 localhost podman[339543]: 2025-10-05 10:15:18.899702488 +0000 UTC m=+0.075085094 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Oct 5 06:15:18 localhost podman[339543]: 2025-10-05 10:15:18.983260238 +0000 UTC m=+0.158642814 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001) Oct 5 06:15:18 localhost 
systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:15:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 41 KiB/s wr, 4 op/s Oct 5 06:15:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50", "format": "json"}]: dispatch Oct 5 06:15:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:20 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e263 e263: 6 total, 6 up, 6 in Oct 5 06:15:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:15:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:15:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:15:20.410 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:15:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:15:20.411 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:15:20 localhost podman[339583]: 2025-10-05 10:15:20.913271589 +0000 UTC m=+0.077116008 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64) Oct 5 06:15:20 localhost podman[339583]: 2025-10-05 10:15:20.929178458 +0000 UTC m=+0.093022927 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, 
container_name=openstack_network_exporter) Oct 5 06:15:20 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:15:21 localhost nova_compute[297130]: 2025-10-05 10:15:21.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 50 KiB/s wr, 5 op/s Oct 5 06:15:22 localhost nova_compute[297130]: 2025-10-05 10:15:22.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:23 localhost nova_compute[297130]: 2025-10-05 10:15:23.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 52 KiB/s wr, 4 op/s Oct 5 06:15:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:24 localhost nova_compute[297130]: 2025-10-05 10:15:24.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Oct 5 06:15:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50", "format": "json"}]: dispatch Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, vol_name:cephfs) < "" Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, vol_name:cephfs) < "" Oct 5 06:15:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta.tmp' Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta.tmp' to config b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta' Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:24 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "format": "json"}]: dispatch Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:24 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:15:25 localhost podman[339603]: 2025-10-05 10:15:25.147127159 +0000 UTC m=+0.079115172 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:15:25 localhost podman[339603]: 2025-10-05 10:15:25.184142655 +0000 UTC m=+0.116130648 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Oct 5 06:15:25 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:15:25 localhost nova_compute[297130]: 2025-10-05 10:15:25.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 52 KiB/s wr, 4 op/s Oct 5 06:15:26 localhost podman[248157]: time="2025-10-05T10:15:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:15:26 localhost podman[248157]: @ - - [05/Oct/2025:10:15:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:15:26 localhost podman[248157]: @ - - [05/Oct/2025:10:15:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false 
HTTP/1.1" 200 19385 "" "Go-http-client/1.1" Oct 5 06:15:26 localhost nova_compute[297130]: 2025-10-05 10:15:26.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:26 localhost nova_compute[297130]: 2025-10-05 10:15:26.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50", "format": "json"}]: dispatch Oct 5 06:15:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50", "force": true, "format": "json"}]: dispatch Oct 5 06:15:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, vol_name:cephfs) < "" Oct 5 06:15:26 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50'' moved to trashcan Oct 5 06:15:26 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f4dd1f6-4d49-4aa7-bf52-e96e3e35fb50, vol_name:cephfs) < "" Oct 5 06:15:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e264 e264: 6 total, 6 up, 6 in Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.306 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.306 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.307 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.307 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.307 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:27 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "snap_name": "21116d31-5ee6-490a-a58b-88b0b14f9a91", "format": "json"}]: dispatch Oct 5 06:15:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:21116d31-5ee6-490a-a58b-88b0b14f9a91, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:27 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:21116d31-5ee6-490a-a58b-88b0b14f9a91, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 
06:15:27 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:15:27 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2317025069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.745 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.909 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.910 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11463MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.910 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.910 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:15:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 80 KiB/s wr, 4 op/s Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.974 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.974 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:15:27 localhost nova_compute[297130]: 2025-10-05 10:15:27.990 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:15:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:15:28 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2008942509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:15:28 localhost nova_compute[297130]: 2025-10-05 10:15:28.436 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:15:28 localhost nova_compute[297130]: 2025-10-05 10:15:28.443 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:15:28 localhost nova_compute[297130]: 2025-10-05 10:15:28.466 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:15:28 localhost nova_compute[297130]: 2025-10-05 10:15:28.469 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:15:28 localhost nova_compute[297130]: 2025-10-05 10:15:28.470 2 DEBUG oslo_concurrency.lockutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:15:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:29 localhost nova_compute[297130]: 2025-10-05 10:15:29.466 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:29 localhost nova_compute[297130]: 2025-10-05 10:15:29.467 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:29 localhost nova_compute[297130]: 2025-10-05 10:15:29.467 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:15:29 localhost nova_compute[297130]: 2025-10-05 10:15:29.468 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:15:29 localhost nova_compute[297130]: 2025-10-05 10:15:29.485 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:15:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 417 B/s rd, 65 KiB/s wr, 3 op/s Oct 5 06:15:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", "snap_name": "1baf90d0-bd7a-42a7-ac36-9e7442037f97_7320d307-e380-4a91-816f-5fd2c1464a76", "force": true, "format": "json"}]: dispatch Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97_7320d307-e380-4a91-816f-5fd2c1464a76, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta' Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97_7320d307-e380-4a91-816f-5fd2c1464a76, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:15:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": 
"aa829b98-ba06-4c96-b3d6-51a924b83689", "snap_name": "1baf90d0-bd7a-42a7-ac36-9e7442037f97", "force": true, "format": "json"}]: dispatch Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta.tmp' to config b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689/.meta' Oct 5 06:15:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1baf90d0-bd7a-42a7-ac36-9e7442037f97, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:15:31 localhost nova_compute[297130]: 2025-10-05 10:15:31.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:31 localhost nova_compute[297130]: 2025-10-05 10:15:31.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:31 localhost nova_compute[297130]: 2025-10-05 10:15:31.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:15:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 64 KiB/s wr, 3 op/s Oct 5 06:15:31 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:31 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta.tmp' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta.tmp' to config b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "format": "json"}]: dispatch Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ce28d532-cf90-4a5c-a702-aa61974c6ec9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, vol_name:cephfs) < "" Oct 5 06:15:32 localhost nova_compute[297130]: 2025-10-05 10:15:32.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce28d532-cf90-4a5c-a702-aa61974c6ec9/.meta.tmp' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce28d532-cf90-4a5c-a702-aa61974c6ec9/.meta.tmp' to config b'/volumes/_nogroup/ce28d532-cf90-4a5c-a702-aa61974c6ec9/.meta' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, vol_name:cephfs) 
< "" Oct 5 06:15:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ce28d532-cf90-4a5c-a702-aa61974c6ec9", "format": "json"}]: dispatch Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, vol_name:cephfs) < "" Oct 5 06:15:32 localhost nova_compute[297130]: 2025-10-05 10:15:32.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "snap_name": "21116d31-5ee6-490a-a58b-88b0b14f9a91_67ada5b6-22a7-41f9-9df3-68f1b8f8d752", "force": true, "format": "json"}]: dispatch Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:21116d31-5ee6-490a-a58b-88b0b14f9a91_67ada5b6-22a7-41f9-9df3-68f1b8f8d752, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta.tmp' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta.tmp' to 
config b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:21116d31-5ee6-490a-a58b-88b0b14f9a91_67ada5b6-22a7-41f9-9df3-68f1b8f8d752, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "snap_name": "21116d31-5ee6-490a-a58b-88b0b14f9a91", "force": true, "format": "json"}]: dispatch Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:21116d31-5ee6-490a-a58b-88b0b14f9a91, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta.tmp' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta.tmp' to config b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18/.meta' Oct 5 06:15:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:21116d31-5ee6-490a-a58b-88b0b14f9a91, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", 
"format": "json"}]: dispatch Oct 5 06:15:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa829b98-ba06-4c96-b3d6-51a924b83689, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa829b98-ba06-4c96-b3d6-51a924b83689, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:33 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:33.378+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa829b98-ba06-4c96-b3d6-51a924b83689' of type subvolume Oct 5 06:15:33 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa829b98-ba06-4c96-b3d6-51a924b83689' of type subvolume Oct 5 06:15:33 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa829b98-ba06-4c96-b3d6-51a924b83689", "force": true, "format": "json"}]: dispatch Oct 5 06:15:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 06:15:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa829b98-ba06-4c96-b3d6-51a924b83689'' moved to trashcan Oct 5 06:15:33 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:33 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa829b98-ba06-4c96-b3d6-51a924b83689, vol_name:cephfs) < "" Oct 5 
06:15:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 6 op/s Oct 5 06:15:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e265 e265: 6 total, 6 up, 6 in Oct 5 06:15:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:15:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:15:34 localhost systemd[1]: tmp-crun.yld8UK.mount: Deactivated successfully. Oct 5 06:15:34 localhost podman[339666]: 2025-10-05 10:15:34.899595124 +0000 UTC m=+0.064934110 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', 
'--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:15:34 localhost podman[339665]: 2025-10-05 10:15:34.953328741 +0000 UTC m=+0.114139615 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd) Oct 5 06:15:34 localhost podman[339665]: 2025-10-05 10:15:34.966124415 +0000 UTC m=+0.126935319 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:15:34 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:15:34 localhost podman[339666]: 2025-10-05 10:15:34.988389244 +0000 UTC m=+0.153728200 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:15:35 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:15:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "snap_name": "791e59d3-9042-4069-820c-cfc7fd43286f", "format": "json"}]: dispatch Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:791e59d3-9042-4069-820c-cfc7fd43286f, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:791e59d3-9042-4069-820c-cfc7fd43286f, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "format": "json"}]: dispatch Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:35.691+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bda52a7b-d17b-449e-a6c5-e195087f5f18' of type subvolume Oct 5 06:15:35 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 
'clone-status' is not allowed on subvolume 'bda52a7b-d17b-449e-a6c5-e195087f5f18' of type subvolume Oct 5 06:15:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bda52a7b-d17b-449e-a6c5-e195087f5f18", "force": true, "format": "json"}]: dispatch Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bda52a7b-d17b-449e-a6c5-e195087f5f18'' moved to trashcan Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bda52a7b-d17b-449e-a6c5-e195087f5f18, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c384eea-2cd7-4d7b-9aa0-3846d544271b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c384eea-2cd7-4d7b-9aa0-3846d544271b/.meta.tmp' Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c384eea-2cd7-4d7b-9aa0-3846d544271b/.meta.tmp' to config b'/volumes/_nogroup/8c384eea-2cd7-4d7b-9aa0-3846d544271b/.meta' Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c384eea-2cd7-4d7b-9aa0-3846d544271b", "format": "json"}]: dispatch Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, vol_name:cephfs) < "" Oct 5 06:15:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 579 B/s rd, 81 KiB/s wr, 7 op/s Oct 5 06:15:36 localhost nova_compute[297130]: 2025-10-05 10:15:36.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:37 localhost nova_compute[297130]: 2025-10-05 10:15:37.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 92 KiB/s wr, 8 op/s Oct 5 06:15:38 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:15:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:15:38 localhost podman[339708]: 2025-10-05 10:15:38.911938458 +0000 UTC m=+0.083000826 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, 
config_id=iscsid) Oct 5 06:15:38 localhost podman[339708]: 2025-10-05 10:15:38.918293459 +0000 UTC m=+0.089355797 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}) Oct 5 06:15:38 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:15:38 localhost podman[339709]: 2025-10-05 10:15:38.965919542 +0000 UTC m=+0.131433701 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:15:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:39 localhost podman[339709]: 2025-10-05 10:15:39.066333496 +0000 UTC m=+0.231847675 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:15:39 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:15:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd28b1c6-c804-4359-bc83-e4de454ba433", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd28b1c6-c804-4359-bc83-e4de454ba433, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cd28b1c6-c804-4359-bc83-e4de454ba433/.meta.tmp' Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cd28b1c6-c804-4359-bc83-e4de454ba433/.meta.tmp' to config b'/volumes/_nogroup/cd28b1c6-c804-4359-bc83-e4de454ba433/.meta' Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd28b1c6-c804-4359-bc83-e4de454ba433, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd28b1c6-c804-4359-bc83-e4de454ba433", "format": "json"}]: dispatch Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd28b1c6-c804-4359-bc83-e4de454ba433, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:cd28b1c6-c804-4359-bc83-e4de454ba433, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "snap_name": "791e59d3-9042-4069-820c-cfc7fd43286f_5121e2dd-04ac-4dbe-bc5b-c3000ba17136", "force": true, "format": "json"}]: dispatch Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:791e59d3-9042-4069-820c-cfc7fd43286f_5121e2dd-04ac-4dbe-bc5b-c3000ba17136, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta.tmp' Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta.tmp' to config b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta' Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:791e59d3-9042-4069-820c-cfc7fd43286f_5121e2dd-04ac-4dbe-bc5b-c3000ba17136, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "snap_name": "791e59d3-9042-4069-820c-cfc7fd43286f", "force": true, "format": "json"}]: dispatch Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:791e59d3-9042-4069-820c-cfc7fd43286f, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta.tmp' Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta.tmp' to config b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae/.meta' Oct 5 06:15:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:791e59d3-9042-4069-820c-cfc7fd43286f, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 92 KiB/s wr, 8 op/s Oct 5 06:15:41 localhost nova_compute[297130]: 2025-10-05 10:15:41.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:15:41 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:15:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:15:41 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' 
entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:15:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:15:41 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 6ba19fd5-a4c0-45be-95b5-23abd0c1fb98 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:15:41 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev 6ba19fd5-a4c0-45be-95b5-23abd0c1fb98 (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:15:41 localhost ceph-mgr[301363]: [progress INFO root] Completed event 6ba19fd5-a4c0-45be-95b5-23abd0c1fb98 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:15:41 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:15:41 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:15:41 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:15:41 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. 
Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.455568) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341455642, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1016, "num_deletes": 253, "total_data_size": 1031965, "memory_usage": 1050312, "flush_reason": "Manual Compaction"} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341462970, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 672642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33661, "largest_seqno": 34672, "table_properties": {"data_size": 668320, "index_size": 1921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11346, "raw_average_key_size": 21, "raw_value_size": 659003, "raw_average_value_size": 1227, "num_data_blocks": 84, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759659292, "oldest_key_time": 1759659292, "file_creation_time": 1759659341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 7451 microseconds, and 3661 cpu microseconds. Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.463022) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 672642 bytes OK Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.463047) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.465047) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.465075) EVENT_LOG_v1 {"time_micros": 1759659341465067, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.465103) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1026724, prev total WAL file size 
1026724, number of live WAL files 2. Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.465798) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end) Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(656KB)], [54(16MB)] Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341465847, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17945432, "oldest_snapshot_seqno": -1} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14462 keys, 16537515 bytes, temperature: kUnknown Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341562408, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 16537515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16456225, "index_size": 44069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36165, "raw_key_size": 390066, "raw_average_key_size": 26, "raw_value_size": 16211955, 
"raw_average_value_size": 1121, "num_data_blocks": 1618, "num_entries": 14462, "num_filter_entries": 14462, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759658248, "oldest_key_time": 0, "file_creation_time": 1759659341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "09f88e28-27a5-4ad9-a669-134d4123f6f8", "db_session_id": "F5HXXNFJ1JNSSRYMZ5WS", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.562705) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16537515 bytes Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564255) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.7 rd, 171.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 16.5 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(51.3) write-amplify(24.6) OK, records in: 14987, records dropped: 525 output_compression: NoCompression Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.564284) EVENT_LOG_v1 {"time_micros": 1759659341564271, "job": 32, "event": "compaction_finished", "compaction_time_micros": 96629, "compaction_time_cpu_micros": 47952, "output_level": 6, "num_output_files": 1, "total_output_size": 16537515, "num_input_records": 14987, "num_output_records": 14462, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341564512, "job": 32, "event": "table_file_deletion", "file_number": 56} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005471152/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759659341567053, "job": 
32, "event": "table_file_deletion", "file_number": 54} Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.465680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.567172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.567177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.567181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.567184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:15:41 localhost ceph-mon[316511]: rocksdb: (Original Log Time 2025/10/05-10:15:41.567187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:15:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:15:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 92 KiB/s wr, 8 op/s Oct 5 06:15:42 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:15:42 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:15:42 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e266 e266: 6 total, 6 up, 6 in Oct 5 06:15:42 localhost nova_compute[297130]: 2025-10-05 10:15:42.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "format": "json"}]: dispatch Oct 5 06:15:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:42 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:42.846+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae' of type subvolume Oct 5 06:15:42 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae' of type subvolume Oct 5 06:15:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae", "force": true, "format": "json"}]: dispatch Oct 5 06:15:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae'' moved to trashcan Oct 5 06:15:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e6a526fe-b2bf-47a9-9d28-b9f8ba9948ae, vol_name:cephfs) < "" Oct 5 06:15:43 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:15:43 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e267 e267: 6 total, 6 up, 6 in Oct 5 06:15:43 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d87b9f30-bcfc-4589-aa58-0feed5ac49d4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Oct 5 06:15:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, vol_name:cephfs) < "" Oct 5 06:15:43 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes 
to config b'/volumes/_nogroup/d87b9f30-bcfc-4589-aa58-0feed5ac49d4/.meta.tmp' Oct 5 06:15:43 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d87b9f30-bcfc-4589-aa58-0feed5ac49d4/.meta.tmp' to config b'/volumes/_nogroup/d87b9f30-bcfc-4589-aa58-0feed5ac49d4/.meta' Oct 5 06:15:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, vol_name:cephfs) < "" Oct 5 06:15:43 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d87b9f30-bcfc-4589-aa58-0feed5ac49d4", "format": "json"}]: dispatch Oct 5 06:15:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, vol_name:cephfs) < "" Oct 5 06:15:43 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, vol_name:cephfs) < "" Oct 5 06:15:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 110 KiB/s wr, 8 op/s Oct 5 06:15:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:15:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 4 op/s Oct 5 06:15:46 localhost nova_compute[297130]: 2025-10-05 10:15:46.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:46 localhost openstack_network_exporter[250246]: ERROR 10:15:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:46 localhost openstack_network_exporter[250246]: ERROR 10:15:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:15:46 localhost openstack_network_exporter[250246]: ERROR 10:15:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:15:46 localhost openstack_network_exporter[250246]: ERROR 10:15:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:15:46 localhost openstack_network_exporter[250246]: Oct 5 06:15:46 localhost openstack_network_exporter[250246]: ERROR 10:15:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:15:46 localhost openstack_network_exporter[250246]: Oct 5 06:15:47 localhost nova_compute[297130]: 2025-10-05 10:15:47.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:15:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 94 KiB/s wr, 7 op/s Oct 5 06:15:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d87b9f30-bcfc-4589-aa58-0feed5ac49d4", "format": "json"}]: dispatch Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO 
volumes.module] Finishing _cmd_fs_clone_status(clone_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:15:48 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd87b9f30-bcfc-4589-aa58-0feed5ac49d4' of type subvolume Oct 5 06:15:48 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:48.132+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd87b9f30-bcfc-4589-aa58-0feed5ac49d4' of type subvolume Oct 5 06:15:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d87b9f30-bcfc-4589-aa58-0feed5ac49d4", "force": true, "format": "json"}]: dispatch Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, vol_name:cephfs) < "" Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d87b9f30-bcfc-4589-aa58-0feed5ac49d4'' moved to trashcan Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d87b9f30-bcfc-4589-aa58-0feed5ac49d4, vol_name:cephfs) < "" Oct 5 06:15:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", 
"format": "json"}]: dispatch
Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp'
Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta'
Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:48 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "format": "json"}]: dispatch
Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:48 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 06:15:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 06:15:49 localhost podman[339838]: 2025-10-05 10:15:49.922704607 +0000 UTC m=+0.083800408 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 06:15:49 localhost podman[339838]: 2025-10-05 10:15:49.931993928 +0000 UTC m=+0.093089709 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 5 06:15:49 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 06:15:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 94 KiB/s wr, 7 op/s
Oct 5 06:15:50 localhost podman[339837]: 2025-10-05 10:15:50.025997718 +0000 UTC m=+0.190114790 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 5 06:15:50 localhost podman[339837]: 2025-10-05 10:15:50.040214991 +0000 UTC m=+0.204332053 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:15:50 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 06:15:51 localhost nova_compute[297130]: 2025-10-05 10:15:51.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:15:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd28b1c6-c804-4359-bc83-e4de454ba433", "format": "json"}]: dispatch
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cd28b1c6-c804-4359-bc83-e4de454ba433, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cd28b1c6-c804-4359-bc83-e4de454ba433, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:15:51 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:51.429+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd28b1c6-c804-4359-bc83-e4de454ba433' of type subvolume
Oct 5 06:15:51 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd28b1c6-c804-4359-bc83-e4de454ba433' of type subvolume
Oct 5 06:15:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd28b1c6-c804-4359-bc83-e4de454ba433", "force": true, "format": "json"}]: dispatch
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd28b1c6-c804-4359-bc83-e4de454ba433, vol_name:cephfs) < ""
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cd28b1c6-c804-4359-bc83-e4de454ba433'' moved to trashcan
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd28b1c6-c804-4359-bc83-e4de454ba433, vol_name:cephfs) < ""
Oct 5 06:15:51 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "c9d3a0bb-e4dd-4439-ae16-d04abb34e265", "format": "json"}]: dispatch
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c9d3a0bb-e4dd-4439-ae16-d04abb34e265, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:51 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c9d3a0bb-e4dd-4439-ae16-d04abb34e265, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:15:51 localhost podman[339879]: 2025-10-05 10:15:51.910021992 +0000 UTC m=+0.078185756 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 5 06:15:51 localhost podman[339879]: 2025-10-05 10:15:51.924176133 +0000 UTC m=+0.092339887 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 5 06:15:51 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:15:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 624 B/s rd, 77 KiB/s wr, 6 op/s
Oct 5 06:15:52 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e268 e268: 6 total, 6 up, 6 in
Oct 5 06:15:52 localhost nova_compute[297130]: 2025-10-05 10:15:52.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:15:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 83 KiB/s wr, 4 op/s
Oct 5 06:15:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:15:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c384eea-2cd7-4d7b-9aa0-3846d544271b", "format": "json"}]: dispatch
Oct 5 06:15:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:15:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:15:54 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:54.722+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c384eea-2cd7-4d7b-9aa0-3846d544271b' of type subvolume
Oct 5 06:15:54 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c384eea-2cd7-4d7b-9aa0-3846d544271b' of type subvolume
Oct 5 06:15:54 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c384eea-2cd7-4d7b-9aa0-3846d544271b", "force": true, "format": "json"}]: dispatch
Oct 5 06:15:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, vol_name:cephfs) < ""
Oct 5 06:15:54 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8c384eea-2cd7-4d7b-9aa0-3846d544271b'' moved to trashcan
Oct 5 06:15:54 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:15:54 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c384eea-2cd7-4d7b-9aa0-3846d544271b, vol_name:cephfs) < ""
Oct 5 06:15:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "a216414e-bdf3-406c-babe-761a23023961", "format": "json"}]: dispatch
Oct 5 06:15:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a216414e-bdf3-406c-babe-761a23023961, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a216414e-bdf3-406c-babe-761a23023961, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 06:15:55 localhost systemd[1]: tmp-crun.PWxx9f.mount: Deactivated successfully.
Oct 5 06:15:55 localhost podman[339898]: 2025-10-05 10:15:55.920974218 +0000 UTC m=+0.089875171 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 5 06:15:55 localhost podman[339898]: 2025-10-05 10:15:55.925680775 +0000 UTC m=+0.094581728 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 5 06:15:55 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 06:15:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 83 KiB/s wr, 4 op/s
Oct 5 06:15:56 localhost podman[248157]: time="2025-10-05T10:15:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:15:56 localhost podman[248157]: @ - - [05/Oct/2025:10:15:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1"
Oct 5 06:15:56 localhost podman[248157]: @ - - [05/Oct/2025:10:15:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19381 "" "Go-http-client/1.1"
Oct 5 06:15:56 localhost nova_compute[297130]: 2025-10-05 10:15:56.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:15:57 localhost nova_compute[297130]: 2025-10-05 10:15:57.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:15:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ce28d532-cf90-4a5c-a702-aa61974c6ec9", "format": "json"}]: dispatch
Oct 5 06:15:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:15:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:15:57 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:15:57.918+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce28d532-cf90-4a5c-a702-aa61974c6ec9' of type subvolume
Oct 5 06:15:57 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce28d532-cf90-4a5c-a702-aa61974c6ec9' of type subvolume
Oct 5 06:15:57 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ce28d532-cf90-4a5c-a702-aa61974c6ec9", "force": true, "format": "json"}]: dispatch
Oct 5 06:15:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, vol_name:cephfs) < ""
Oct 5 06:15:57 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ce28d532-cf90-4a5c-a702-aa61974c6ec9'' moved to trashcan
Oct 5 06:15:57 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:15:57 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce28d532-cf90-4a5c-a702-aa61974c6ec9, vol_name:cephfs) < ""
Oct 5 06:15:57 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 77 KiB/s wr, 4 op/s
Oct 5 06:15:58 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "a216414e-bdf3-406c-babe-761a23023961_94e060cd-f27e-4ddb-b2f2-9a63b9ad053d", "force": true, "format": "json"}]: dispatch
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a216414e-bdf3-406c-babe-761a23023961_94e060cd-f27e-4ddb-b2f2-9a63b9ad053d, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp'
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta'
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a216414e-bdf3-406c-babe-761a23023961_94e060cd-f27e-4ddb-b2f2-9a63b9ad053d, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:58 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "a216414e-bdf3-406c-babe-761a23023961", "force": true, "format": "json"}]: dispatch
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a216414e-bdf3-406c-babe-761a23023961, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp'
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta'
Oct 5 06:15:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a216414e-bdf3-406c-babe-761a23023961, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:15:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:15:59 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 77 KiB/s wr, 4 op/s
Oct 5 06:16:01 localhost nova_compute[297130]: 2025-10-05 10:16:01.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:01 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 77 KiB/s wr, 4 op/s
Oct 5 06:16:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "af3a3f41-6c13-4e4b-86e8-c9889d02aea5", "format": "json"}]: dispatch
Oct 5 06:16:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:af3a3f41-6c13-4e4b-86e8-c9889d02aea5, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:16:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:af3a3f41-6c13-4e4b-86e8-c9889d02aea5, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:16:02 localhost nova_compute[297130]: 2025-10-05 10:16:02.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:16:02.743 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:16:02 localhost nova_compute[297130]: 2025-10-05 10:16:02.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:02 localhost ovn_metadata_agent[163196]: 2025-10-05 10:16:02.745 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 5 06:16:03 localhost ovn_metadata_agent[163196]: 2025-10-05 10:16:03.747 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 5 06:16:03 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 865 B/s rd, 96 KiB/s wr, 6 op/s
Oct 5 06:16:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:16:05 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e269 e269: 6 total, 6 up, 6 in
Oct 5 06:16:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:16:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:16:05 localhost podman[339916]: 2025-10-05 10:16:05.911988789 +0000 UTC m=+0.082395300 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:16:05 localhost podman[339916]: 2025-10-05 10:16:05.923311954 +0000 UTC m=+0.093718465 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 5 06:16:05 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:16:05 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 68 KiB/s wr, 5 op/s
Oct 5 06:16:06 localhost podman[339917]: 2025-10-05 10:16:06.017943742 +0000 UTC m=+0.184002106 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon',
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:16:06 localhost podman[339917]: 2025-10-05 10:16:06.029120173 +0000 UTC m=+0.195178527 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Oct 5 06:16:06 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:16:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "af3a3f41-6c13-4e4b-86e8-c9889d02aea5_b8803cff-3ab7-40d9-ba77-7075d2447222", "force": true, "format": "json"}]: dispatch Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:af3a3f41-6c13-4e4b-86e8-c9889d02aea5_b8803cff-3ab7-40d9-ba77-7075d2447222, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:af3a3f41-6c13-4e4b-86e8-c9889d02aea5_b8803cff-3ab7-40d9-ba77-7075d2447222, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:06 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": 
"4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "af3a3f41-6c13-4e4b-86e8-c9889d02aea5", "force": true, "format": "json"}]: dispatch Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:af3a3f41-6c13-4e4b-86e8-c9889d02aea5, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:06 localhost nova_compute[297130]: 2025-10-05 10:16:06.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:06 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:af3a3f41-6c13-4e4b-86e8-c9889d02aea5, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:07 localhost nova_compute[297130]: 2025-10-05 10:16:07.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:07 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 69 KiB/s wr, 4 op/s Oct 5 06:16:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:09 localhost 
ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "f41d1714-f38f-43be-9fe5-72fb7359cfd1", "format": "json"}]: dispatch Oct 5 06:16:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f41d1714-f38f-43be-9fe5-72fb7359cfd1, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:09 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f41d1714-f38f-43be-9fe5-72fb7359cfd1, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:16:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:16:09 localhost systemd[1]: tmp-crun.ivAqe0.mount: Deactivated successfully. 
Oct 5 06:16:09 localhost podman[339958]: 2025-10-05 10:16:09.902363861 +0000 UTC m=+0.070313655 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible) Oct 5 06:16:09 localhost podman[339958]: 2025-10-05 10:16:09.939151892 +0000 UTC m=+0.107101686 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:16:09 localhost systemd[1]: tmp-crun.VwpZPY.mount: Deactivated successfully. Oct 5 06:16:09 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:16:09 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 69 KiB/s wr, 4 op/s Oct 5 06:16:10 localhost podman[339957]: 2025-10-05 10:16:09.950286301 +0000 UTC m=+0.122392846 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac) Oct 5 06:16:10 localhost podman[339957]: 2025-10-05 
10:16:10.031373884 +0000 UTC m=+0.203480479 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=iscsid) Oct 5 06:16:10 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:16:11 localhost nova_compute[297130]: 2025-10-05 10:16:11.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:16:11 Oct 5 06:16:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:16:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:16:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'volumes', 'images', 'vms', 'backups', 'manila_data'] Oct 5 06:16:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:16:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:16:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:16:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:16:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:16:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:16:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:16:11 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 69 KiB/s wr, 4 op/s Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:16:12 localhost ceph-mgr[301363]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:16:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002094620236599665 of space, bias 4.0, pg target 1.6673177083333333 quantized to 16 (current 16) Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:16:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:16:12 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e270 e270: 6 total, 6 up, 6 in Oct 5 06:16:12 localhost nova_compute[297130]: 2025-10-05 10:16:12.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:13 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon).osd e271 e271: 6 total, 6 up, 6 in Oct 5 06:16:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "f41d1714-f38f-43be-9fe5-72fb7359cfd1_3c56e1d0-3d17-41ec-95ea-b57aab72054c", "force": true, "format": "json"}]: dispatch Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f41d1714-f38f-43be-9fe5-72fb7359cfd1_3c56e1d0-3d17-41ec-95ea-b57aab72054c, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f41d1714-f38f-43be-9fe5-72fb7359cfd1_3c56e1d0-3d17-41ec-95ea-b57aab72054c, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "f41d1714-f38f-43be-9fe5-72fb7359cfd1", "force": true, "format": "json"}]: dispatch Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f41d1714-f38f-43be-9fe5-72fb7359cfd1, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f41d1714-f38f-43be-9fe5-72fb7359cfd1, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:13 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "c029d669-eb27-4251-ba09-dfb758aec4ed", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:c029d669-eb27-4251-ba09-dfb758aec4ed, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:13 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:c029d669-eb27-4251-ba09-dfb758aec4ed, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:13 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 58 KiB/s wr, 3 op/s Oct 5 06:16:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e271 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:15 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e272 e272: 6 total, 6 up, 6 in Oct 5 06:16:15 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 23 KiB/s wr, 1 op/s Oct 5 06:16:16 localhost nova_compute[297130]: 2025-10-05 10:16:16.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:16 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058", "format": "json"}]: dispatch Oct 5 06:16:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:16 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:16 localhost openstack_network_exporter[250246]: ERROR 10:16:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:16:16 localhost openstack_network_exporter[250246]: ERROR 10:16:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:16:16 localhost openstack_network_exporter[250246]: ERROR 10:16:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files 
found for the ovs db server Oct 5 06:16:16 localhost openstack_network_exporter[250246]: ERROR 10:16:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:16:16 localhost openstack_network_exporter[250246]: Oct 5 06:16:16 localhost openstack_network_exporter[250246]: ERROR 10:16:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:16:16 localhost openstack_network_exporter[250246]: Oct 5 06:16:17 localhost nova_compute[297130]: 2025-10-05 10:16:17.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:17 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 96 KiB/s wr, 6 op/s Oct 5 06:16:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "c029d669-eb27-4251-ba09-dfb758aec4ed", "snap_name": "3ba7d823-f92e-4310-bf80-dbedef81e122", "force": true, "format": "json"}]: dispatch Oct 5 06:16:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:c029d669-eb27-4251-ba09-dfb758aec4ed, prefix:fs subvolumegroup snapshot rm, snap_name:3ba7d823-f92e-4310-bf80-dbedef81e122, vol_name:cephfs) < "" Oct 5 06:16:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:c029d669-eb27-4251-ba09-dfb758aec4ed, prefix:fs subvolumegroup snapshot rm, snap_name:3ba7d823-f92e-4310-bf80-dbedef81e122, vol_name:cephfs) < "" 
Oct 5 06:16:19 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "c029d669-eb27-4251-ba09-dfb758aec4ed", "force": true, "format": "json"}]: dispatch Oct 5 06:16:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c029d669-eb27-4251-ba09-dfb758aec4ed, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:16:19 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c029d669-eb27-4251-ba09-dfb758aec4ed, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:16:19 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 523 B/s rd, 74 KiB/s wr, 4 op/s Oct 5 06:16:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:16:20.411 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:16:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:16:20.412 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:16:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:16:20.412 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:16:20 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:16:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:16:20 localhost podman[340003]: 2025-10-05 10:16:20.917963331 +0000 UTC m=+0.085008151 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3) Oct 5 06:16:20 localhost podman[340003]: 2025-10-05 10:16:20.930135888 +0000 UTC m=+0.097180738 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3) Oct 5 06:16:20 localhost systemd[1]: 
b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:16:21 localhost systemd[1]: tmp-crun.w0hXxv.mount: Deactivated successfully. Oct 5 06:16:21 localhost podman[340004]: 2025-10-05 10:16:21.016810992 +0000 UTC m=+0.181141159 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Oct 5 06:16:21 localhost podman[340004]: 2025-10-05 10:16:21.028163908 +0000 UTC m=+0.192494105 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Oct 5 06:16:21 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:16:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058_e990e775-4098-4b20-bd62-6c7daa9e15ce", "force": true, "format": "json"}]: dispatch Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058_e990e775-4098-4b20-bd62-6c7daa9e15ce, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:21 localhost nova_compute[297130]: 2025-10-05 10:16:21.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058_e990e775-4098-4b20-bd62-6c7daa9e15ce, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058", "force": true, "format": "json"}]: dispatch Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d7f16bdd-753f-4ea8-8bdf-adfc0c4aa058, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:21 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 232 B/s rd, 49 KiB/s wr, 2 op/s Oct 5 06:16:22 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e273 e273: 6 total, 6 up, 6 in Oct 5 06:16:22 localhost nova_compute[297130]: 2025-10-05 10:16:22.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:16:22 localhost podman[340044]: 2025-10-05 10:16:22.923787384 +0000 UTC m=+0.090435717 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Oct 5 06:16:22 localhost podman[340044]: 2025-10-05 10:16:22.934570463 +0000 UTC m=+0.101218766 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, 
io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm) Oct 5 06:16:22 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "c776779b-adf0-44ce-b29b-7e84f414c413", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:16:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:22 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. 
Oct 5 06:16:22 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:23 localhost nova_compute[297130]: 2025-10-05 10:16:23.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:23 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 234 B/s rd, 79 KiB/s wr, 4 op/s Oct 5 06:16:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:24 localhost nova_compute[297130]: 2025-10-05 10:16:24.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e274 e274: 6 total, 6 up, 6 in Oct 5 06:16:25 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "3ca0574a-d055-4ef6-9682-1f87d95b745c", "format": "json"}]: dispatch Oct 5 06:16:25 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3ca0574a-d055-4ef6-9682-1f87d95b745c, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:25 localhost 
ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3ca0574a-d055-4ef6-9682-1f87d95b745c, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:25 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s wr, 1 op/s Oct 5 06:16:26 localhost podman[248157]: time="2025-10-05T10:16:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:16:26 localhost podman[248157]: @ - - [05/Oct/2025:10:16:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:16:26 localhost podman[248157]: @ - - [05/Oct/2025:10:16:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19383 "" "Go-http-client/1.1" Oct 5 06:16:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "88d34436-57ca-4abf-b550-a57697ab7867", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "c776779b-adf0-44ce-b29b-7e84f414c413", "format": "json"}]: dispatch Oct 5 06:16:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:88d34436-57ca-4abf-b550-a57697ab7867, vol_name:cephfs) < "" Oct 5 06:16:26 localhost nova_compute[297130]: 2025-10-05 10:16:26.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:26 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/c776779b-adf0-44ce-b29b-7e84f414c413/88d34436-57ca-4abf-b550-a57697ab7867/.meta.tmp' Oct 5 06:16:26 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/c776779b-adf0-44ce-b29b-7e84f414c413/88d34436-57ca-4abf-b550-a57697ab7867/.meta.tmp' to config b'/volumes/c776779b-adf0-44ce-b29b-7e84f414c413/88d34436-57ca-4abf-b550-a57697ab7867/.meta' Oct 5 06:16:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:88d34436-57ca-4abf-b550-a57697ab7867, vol_name:cephfs) < "" Oct 5 06:16:26 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "88d34436-57ca-4abf-b550-a57697ab7867", "group_name": "c776779b-adf0-44ce-b29b-7e84f414c413", "format": "json"}]: dispatch Oct 5 06:16:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs subvolume getpath, sub_name:88d34436-57ca-4abf-b550-a57697ab7867, vol_name:cephfs) < "" Oct 5 06:16:26 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs subvolume getpath, sub_name:88d34436-57ca-4abf-b550-a57697ab7867, vol_name:cephfs) < "" Oct 5 06:16:26 localhost nova_compute[297130]: 2025-10-05 10:16:26.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 
5 06:16:26 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:16:26.596 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:16:26Z, description=, device_id=fb92f8ec-bdd4-4da9-a23b-821d48ebc79d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=981e6de1-a69c-4965-a7cb-35ffafbabc28, ip_allocation=immediate, mac_address=fa:16:3e:b4:d4:3a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3952, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:16:26Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:16:26 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses Oct 5 06:16:26 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:16:26 localhost dnsmasq-dhcp[325876]: read 
/var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:16:26 localhost podman[340081]: 2025-10-05 10:16:26.813873876 +0000 UTC m=+0.062131254 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:16:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:16:26 localhost podman[340095]: 2025-10-05 10:16:26.930041925 +0000 UTC m=+0.088099944 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:16:26 localhost podman[340095]: 2025-10-05 10:16:26.963254078 +0000 UTC m=+0.121312057 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Oct 5 06:16:26 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:16:27 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:16:27.107 271653 INFO neutron.agent.dhcp.agent [None req-9734a98d-e65d-49e4-ae60-931c8227394f - - - - - -] DHCP configuration for ports {'981e6de1-a69c-4965-a7cb-35ffafbabc28'} is completed#033[00m Oct 5 06:16:27 localhost nova_compute[297130]: 2025-10-05 10:16:27.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:27 localhost nova_compute[297130]: 2025-10-05 10:16:27.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:27 localhost nova_compute[297130]: 2025-10-05 10:16:27.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:27 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 
223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 65 KiB/s wr, 4 op/s Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.305 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.305 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.306 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.306 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.306 2 DEBUG 
oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:16:28 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:16:28 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/764758692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.800 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.982 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.984 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11453MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.984 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:16:28 localhost nova_compute[297130]: 2025-10-05 10:16:28.984 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.055 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.055 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:16:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.093 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:16:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:16:29 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3859672553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.523 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.528 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.544 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 
'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.545 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.545 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:16:29 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "88d34436-57ca-4abf-b550-a57697ab7867", "group_name": "c776779b-adf0-44ce-b29b-7e84f414c413", "format": "json"}]: dispatch Oct 5 06:16:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:88d34436-57ca-4abf-b550-a57697ab7867, format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:16:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:88d34436-57ca-4abf-b550-a57697ab7867, format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs clone status, vol_name:cephfs) < "" Oct 5 06:16:29 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:16:29.557+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'88d34436-57ca-4abf-b550-a57697ab7867' of type subvolume Oct 5 06:16:29 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '88d34436-57ca-4abf-b550-a57697ab7867' of type subvolume Oct 5 06:16:29 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "88d34436-57ca-4abf-b550-a57697ab7867", "force": true, "group_name": "c776779b-adf0-44ce-b29b-7e84f414c413", "format": "json"}]: dispatch Oct 5 06:16:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs subvolume rm, sub_name:88d34436-57ca-4abf-b550-a57697ab7867, vol_name:cephfs) < "" Oct 5 06:16:29 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/c776779b-adf0-44ce-b29b-7e84f414c413/88d34436-57ca-4abf-b550-a57697ab7867'' moved to trashcan Oct 5 06:16:29 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Oct 5 06:16:29 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs subvolume rm, sub_name:88d34436-57ca-4abf-b550-a57697ab7867, vol_name:cephfs) < "" Oct 5 06:16:29 localhost nova_compute[297130]: 2025-10-05 10:16:29.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:29 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 65 KiB/s wr, 4 op/s Oct 5 06:16:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "3ca0574a-d055-4ef6-9682-1f87d95b745c_3af366d2-bd5d-4f3c-b2c1-a190858db104", "force": true, "format": "json"}]: dispatch Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ca0574a-d055-4ef6-9682-1f87d95b745c_3af366d2-bd5d-4f3c-b2c1-a190858db104, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ca0574a-d055-4ef6-9682-1f87d95b745c_3af366d2-bd5d-4f3c-b2c1-a190858db104, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:30 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "3ca0574a-d055-4ef6-9682-1f87d95b745c", "force": true, "format": "json"}]: dispatch Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ca0574a-d055-4ef6-9682-1f87d95b745c, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:30 
localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:30 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ca0574a-d055-4ef6-9682-1f87d95b745c, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:30 localhost nova_compute[297130]: 2025-10-05 10:16:30.542 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:30 localhost nova_compute[297130]: 2025-10-05 10:16:30.542 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:30 localhost nova_compute[297130]: 2025-10-05 10:16:30.543 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:16:30 localhost nova_compute[297130]: 2025-10-05 10:16:30.543 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:16:30 localhost 
nova_compute[297130]: 2025-10-05 10:16:30.561 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:16:31 localhost nova_compute[297130]: 2025-10-05 10:16:31.271 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:31 localhost nova_compute[297130]: 2025-10-05 10:16:31.272 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:16:31 localhost nova_compute[297130]: 2025-10-05 10:16:31.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:31 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 208 B/s rd, 53 KiB/s wr, 3 op/s Oct 5 06:16:32 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e275 e275: 6 total, 6 up, 6 in Oct 5 06:16:32 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "c776779b-adf0-44ce-b29b-7e84f414c413", "force": true, "format": "json"}]: dispatch Oct 5 06:16:32 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:16:32 localhost ceph-mgr[301363]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:c776779b-adf0-44ce-b29b-7e84f414c413, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:16:32 localhost nova_compute[297130]: 2025-10-05 10:16:32.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:33 localhost nova_compute[297130]: 2025-10-05 10:16:33.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:16:33 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 641 B/s rd, 67 KiB/s wr, 5 op/s Oct 5 06:16:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:34 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:16:34.962 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:16:34Z, description=, device_id=a8d51cec-3c9e-4c31-afd9-1a7e902d28ae, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b02a7638-80ee-428a-8686-71157cadc5ef, ip_allocation=immediate, mac_address=fa:16:3e:d3:bc:f0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, 
name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3965, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:16:34Z on network cda0aa48-2690-46e0-99f3-e1922fca64be#033[00m Oct 5 06:16:35 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 3 addresses Oct 5 06:16:35 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:16:35 localhost podman[340182]: 2025-10-05 10:16:35.190367888 +0000 UTC m=+0.067158739 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3) Oct 5 06:16:35 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:16:35 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:16:35.388 271653 INFO neutron.agent.dhcp.agent [None req-ad8423bb-9402-4e70-bc05-8ea6ed036253 - - - - - -] 
DHCP configuration for ports {'b02a7638-80ee-428a-8686-71157cadc5ef'} is completed#033[00m Oct 5 06:16:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "c9d3a0bb-e4dd-4439-ae16-d04abb34e265_3b9afe4b-92be-411f-b991-11bc1859d163", "force": true, "format": "json"}]: dispatch Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c9d3a0bb-e4dd-4439-ae16-d04abb34e265_3b9afe4b-92be-411f-b991-11bc1859d163, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c9d3a0bb-e4dd-4439-ae16-d04abb34e265_3b9afe4b-92be-411f-b991-11bc1859d163, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:35 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "snap_name": "c9d3a0bb-e4dd-4439-ae16-d04abb34e265", "force": true, "format": "json"}]: dispatch Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c9d3a0bb-e4dd-4439-ae16-d04abb34e265, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta.tmp' to config b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc/.meta' Oct 5 06:16:35 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c9d3a0bb-e4dd-4439-ae16-d04abb34e265, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < "" Oct 5 06:16:35 localhost nova_compute[297130]: 2025-10-05 10:16:35.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:35 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 64 KiB/s wr, 5 op/s Oct 5 06:16:36 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:16:36 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:36 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolumegroup_create(format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:36 localhost nova_compute[297130]: 2025-10-05 10:16:36.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:16:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:16:36 localhost podman[340206]: 2025-10-05 10:16:36.936388736 +0000 UTC m=+0.102205584 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:16:36 localhost podman[340206]: 2025-10-05 10:16:36.946190879 +0000 UTC m=+0.112007747 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true) Oct 5 06:16:36 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:16:37 localhost systemd[1]: tmp-crun.c4eYOC.mount: Deactivated successfully. Oct 5 06:16:37 localhost podman[340207]: 2025-10-05 10:16:37.031346712 +0000 UTC m=+0.193369519 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Oct 5 06:16:37 localhost podman[340207]: 2025-10-05 10:16:37.062887462 +0000 UTC m=+0.224910229 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:16:37 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. 
Oct 5 06:16:37 localhost nova_compute[297130]: 2025-10-05 10:16:37.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:37 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 82 KiB/s wr, 4 op/s
Oct 5 06:16:38 localhost nova_compute[297130]: 2025-10-05 10:16:38.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.888 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.889 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.890 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.891 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.892 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.892 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.892 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.892 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.892 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.893 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.893 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.893 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.894 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.894 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.894 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.894 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:38 localhost ceilometer_agent_compute[245451]: 2025-10-05 10:16:38.895 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 5 06:16:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:16:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "format": "json"}]: dispatch
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:16:39.135+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4762548d-ed2e-4d67-8c85-a038207b6ccc' of type subvolume
Oct 5 06:16:39 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4762548d-ed2e-4d67-8c85-a038207b6ccc' of type subvolume
Oct 5 06:16:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4762548d-ed2e-4d67-8c85-a038207b6ccc", "force": true, "format": "json"}]: dispatch
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4762548d-ed2e-4d67-8c85-a038207b6ccc'' moved to trashcan
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4762548d-ed2e-4d67-8c85-a038207b6ccc, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a6e13faf-9b4f-4385-afba-d9503d04e724", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6", "format": "json"}]: dispatch
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a6e13faf-9b4f-4385-afba-d9503d04e724, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6/a6e13faf-9b4f-4385-afba-d9503d04e724/.meta.tmp'
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6/a6e13faf-9b4f-4385-afba-d9503d04e724/.meta.tmp' to config b'/volumes/a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6/a6e13faf-9b4f-4385-afba-d9503d04e724/.meta'
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a6e13faf-9b4f-4385-afba-d9503d04e724, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a6e13faf-9b4f-4385-afba-d9503d04e724", "group_name": "a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6", "format": "json"}]: dispatch
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs subvolume getpath, sub_name:a6e13faf-9b4f-4385-afba-d9503d04e724, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs subvolume getpath, sub_name:a6e13faf-9b4f-4385-afba-d9503d04e724, vol_name:cephfs) < ""
Oct 5 06:16:39 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 82 KiB/s wr, 4 op/s
Oct 5 06:16:40 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e276 e276: 6 total, 6 up, 6 in
Oct 5 06:16:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:16:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:16:40 localhost podman[340249]: 2025-10-05 10:16:40.90307315 +0000 UTC m=+0.074246492 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:16:40 localhost podman[340249]: 2025-10-05 10:16:40.911250179 +0000 UTC m=+0.082423521 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 5 06:16:40 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:16:40 localhost systemd[1]: tmp-crun.TCMxEU.mount: Deactivated successfully.
Oct 5 06:16:40 localhost podman[340250]: 2025-10-05 10:16:40.962425158 +0000 UTC m=+0.129997673 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac)
Oct 5 06:16:41 localhost podman[340250]: 2025-10-05 10:16:41.075200805 +0000 UTC m=+0.242773320 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true)
Oct 5 06:16:41 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:16:41 localhost nova_compute[297130]: 2025-10-05 10:16:41.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:16:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:16:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:16:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:16:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:16:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:16:41 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 417 B/s rd, 83 KiB/s wr, 4 op/s
Oct 5 06:16:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a6e13faf-9b4f-4385-afba-d9503d04e724", "group_name": "a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6", "format": "json"}]: dispatch
Oct 5 06:16:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a6e13faf-9b4f-4385-afba-d9503d04e724, format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:16:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a6e13faf-9b4f-4385-afba-d9503d04e724, format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:16:42 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:16:42.733+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a6e13faf-9b4f-4385-afba-d9503d04e724' of type subvolume
Oct 5 06:16:42 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a6e13faf-9b4f-4385-afba-d9503d04e724' of type subvolume
Oct 5 06:16:42 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a6e13faf-9b4f-4385-afba-d9503d04e724", "force": true, "group_name": "a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6", "format": "json"}]: dispatch
Oct 5 06:16:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs subvolume rm, sub_name:a6e13faf-9b4f-4385-afba-d9503d04e724, vol_name:cephfs) < ""
Oct 5 06:16:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6/a6e13faf-9b4f-4385-afba-d9503d04e724'' moved to trashcan
Oct 5 06:16:42 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:16:42 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs subvolume rm, sub_name:a6e13faf-9b4f-4385-afba-d9503d04e724, vol_name:cephfs) < ""
Oct 5 06:16:42 localhost nova_compute[297130]: 2025-10-05 10:16:42.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:43 localhost podman[340449]:
Oct 5 06:16:43 localhost podman[340449]: 2025-10-05 10:16:43.341258056 +0000 UTC m=+0.075241107 container create 626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_archimedes, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7)
Oct 5 06:16:43 localhost nova_compute[297130]: 2025-10-05 10:16:43.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:16:43 localhost systemd[1]: Started libpod-conmon-626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd.scope.
Oct 5 06:16:43 localhost podman[340466]: 2025-10-05 10:16:43.393657917 +0000 UTC m=+0.065278519 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, io.buildah.version=1.41.3)
Oct 5 06:16:43 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses
Oct 5 06:16:43 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:16:43 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:16:43 localhost systemd[1]: Started libcrun container.
Oct 5 06:16:43 localhost podman[340449]: 2025-10-05 10:16:43.308969676 +0000 UTC m=+0.042952757 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 5 06:16:43 localhost podman[340449]: 2025-10-05 10:16:43.418424163 +0000 UTC m=+0.152407204 container init 626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_archimedes, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 5 06:16:43 localhost systemd[1]: tmp-crun.eWhEhO.mount: Deactivated successfully.
Oct 5 06:16:43 localhost podman[340449]: 2025-10-05 10:16:43.432884133 +0000 UTC m=+0.166867184 container start 626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_archimedes, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64)
Oct 5 06:16:43 localhost podman[340449]: 2025-10-05 10:16:43.433400367 +0000 UTC m=+0.167383448 container attach 626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_archimedes, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., release=553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 5 06:16:43 localhost gracious_archimedes[340479]: 167 167
Oct 5 06:16:43 localhost systemd[1]: libpod-626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd.scope: Deactivated successfully.
Oct 5 06:16:43 localhost podman[340449]: 2025-10-05 10:16:43.445593935 +0000 UTC m=+0.179577006 container died 626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_archimedes, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, version=7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, distribution-scope=public)
Oct 5 06:16:43 localhost podman[340487]: 2025-10-05 10:16:43.548704452 +0000 UTC m=+0.087962919 container remove 626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_archimedes, version=7, ceph=True, io.openshift.expose-services=, name=rhceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-type=git)
Oct 5 06:16:43 localhost systemd[1]: libpod-conmon-626c4ddef369131790ce6bfb2ed1a7189d70d97a7208a1cf5346f653fc57bbbd.scope: Deactivated successfully.
Oct 5 06:16:43 localhost podman[340514]:
Oct 5 06:16:43 localhost podman[340514]: 2025-10-05 10:16:43.77034728 +0000 UTC m=+0.078476554 container create 2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_lederberg, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux )
Oct 5 06:16:43 localhost systemd[1]: Started libpod-conmon-2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0.scope.
Oct 5 06:16:43 localhost systemd[1]: Started libcrun container.
Oct 5 06:16:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d16667bf388d29e1aca8ea328936067c24290e517ee3deddf807d99c50cf137/merged/rootfs supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d16667bf388d29e1aca8ea328936067c24290e517ee3deddf807d99c50cf137/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d16667bf388d29e1aca8ea328936067c24290e517ee3deddf807d99c50cf137/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d16667bf388d29e1aca8ea328936067c24290e517ee3deddf807d99c50cf137/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Oct 5 06:16:43 localhost podman[340514]: 2025-10-05 10:16:43.740051094 +0000 UTC m=+0.048180388 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Oct 5 06:16:43 localhost podman[340514]: 2025-10-05 10:16:43.841532767 +0000 UTC m=+0.149662031 container init 2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_lederberg, vcs-type=git, maintainer=Guillaume Abrioux , release=553, distribution-scope=public, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, 
architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements) Oct 5 06:16:43 localhost podman[340514]: 2025-10-05 10:16:43.852174533 +0000 UTC m=+0.160303797 container start 2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_lederberg, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55) Oct 5 06:16:43 localhost podman[340514]: 2025-10-05 10:16:43.85241621 +0000 UTC m=+0.160545474 container attach 2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_lederberg, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a 
fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Oct 5 06:16:43 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 84 KiB/s wr, 4 op/s Oct 5 06:16:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:44 localhost systemd[1]: var-lib-containers-storage-overlay-6a63117f301d808e933ba350e44a9a2f9d1054b2547cf184518092bff62450a1-merged.mount: Deactivated successfully. 
Oct 5 06:16:44 localhost boring_lederberg[340529]: [ Oct 5 06:16:44 localhost boring_lederberg[340529]: { Oct 5 06:16:44 localhost boring_lederberg[340529]: "available": false, Oct 5 06:16:44 localhost boring_lederberg[340529]: "ceph_device": false, Oct 5 06:16:44 localhost boring_lederberg[340529]: "device_id": "QEMU_DVD-ROM_QM00001", Oct 5 06:16:44 localhost boring_lederberg[340529]: "lsm_data": {}, Oct 5 06:16:44 localhost boring_lederberg[340529]: "lvs": [], Oct 5 06:16:44 localhost boring_lederberg[340529]: "path": "/dev/sr0", Oct 5 06:16:44 localhost boring_lederberg[340529]: "rejected_reasons": [ Oct 5 06:16:44 localhost boring_lederberg[340529]: "Insufficient space (<5GB)", Oct 5 06:16:44 localhost boring_lederberg[340529]: "Has a FileSystem" Oct 5 06:16:44 localhost boring_lederberg[340529]: ], Oct 5 06:16:44 localhost boring_lederberg[340529]: "sys_api": { Oct 5 06:16:44 localhost boring_lederberg[340529]: "actuators": null, Oct 5 06:16:44 localhost boring_lederberg[340529]: "device_nodes": "sr0", Oct 5 06:16:44 localhost boring_lederberg[340529]: "human_readable_size": "482.00 KB", Oct 5 06:16:44 localhost boring_lederberg[340529]: "id_bus": "ata", Oct 5 06:16:44 localhost boring_lederberg[340529]: "model": "QEMU DVD-ROM", Oct 5 06:16:44 localhost boring_lederberg[340529]: "nr_requests": "2", Oct 5 06:16:44 localhost boring_lederberg[340529]: "partitions": {}, Oct 5 06:16:44 localhost boring_lederberg[340529]: "path": "/dev/sr0", Oct 5 06:16:44 localhost boring_lederberg[340529]: "removable": "1", Oct 5 06:16:44 localhost boring_lederberg[340529]: "rev": "2.5+", Oct 5 06:16:44 localhost boring_lederberg[340529]: "ro": "0", Oct 5 06:16:44 localhost boring_lederberg[340529]: "rotational": "1", Oct 5 06:16:44 localhost boring_lederberg[340529]: "sas_address": "", Oct 5 06:16:44 localhost boring_lederberg[340529]: "sas_device_handle": "", Oct 5 06:16:44 localhost boring_lederberg[340529]: "scheduler_mode": "mq-deadline", Oct 5 06:16:44 localhost 
boring_lederberg[340529]: "sectors": 0, Oct 5 06:16:44 localhost boring_lederberg[340529]: "sectorsize": "2048", Oct 5 06:16:44 localhost boring_lederberg[340529]: "size": 493568.0, Oct 5 06:16:44 localhost boring_lederberg[340529]: "support_discard": "0", Oct 5 06:16:44 localhost boring_lederberg[340529]: "type": "disk", Oct 5 06:16:44 localhost boring_lederberg[340529]: "vendor": "QEMU" Oct 5 06:16:44 localhost boring_lederberg[340529]: } Oct 5 06:16:44 localhost boring_lederberg[340529]: } Oct 5 06:16:44 localhost boring_lederberg[340529]: ] Oct 5 06:16:44 localhost systemd[1]: libpod-2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0.scope: Deactivated successfully. Oct 5 06:16:44 localhost systemd[1]: libpod-2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0.scope: Consumed 1.123s CPU time. Oct 5 06:16:44 localhost podman[340514]: 2025-10-05 10:16:44.969036649 +0000 UTC m=+1.277165923 container died 2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_lederberg, io.k8s.description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, architecture=x86_64, 
GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7) Oct 5 06:16:45 localhost systemd[1]: tmp-crun.GRSwwz.mount: Deactivated successfully. Oct 5 06:16:45 localhost systemd[1]: var-lib-containers-storage-overlay-7d16667bf388d29e1aca8ea328936067c24290e517ee3deddf807d99c50cf137-merged.mount: Deactivated successfully. Oct 5 06:16:45 localhost podman[342666]: 2025-10-05 10:16:45.074655983 +0000 UTC m=+0.094095685 container remove 2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_lederberg, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.33.12, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Oct 5 06:16:45 localhost systemd[1]: libpod-conmon-2c23ef68bd500dbadb3e88f2877b01ee56d496109b6075945d235d3f7373acf0.scope: Deactivated successfully. 
Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain.devices.0}] v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471152.localdomain}] v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:45 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain.devices.0}] v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain.devices.0}] v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471151.localdomain}] v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005471150.localdomain}] v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : 
dispatch Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:16:45 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev be3c9f9c-a58c-48c1-ad0b-c1983998b20c (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:16:45 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev be3c9f9c-a58c-48c1-ad0b-c1983998b20c (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:16:45 localhost ceph-mgr[301363]: [progress INFO root] Completed event be3c9f9c-a58c-48c1-ad0b-c1983998b20c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:16:45 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:16:45 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:16:45 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6", "force": true, "format": "json"}]: dispatch Oct 5 06:16:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:16:45 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a1ac2d61-c5bb-42ef-ac26-fd2c4f7e9dc6, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Oct 5 06:16:45 localhost nova_compute[297130]: 2025-10-05 10:16:45.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:45 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 84 KiB/s wr, 4 op/s Oct 5 06:16:46 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses Oct 5 06:16:46 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host Oct 5 06:16:46 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts Oct 5 06:16:46 localhost podman[342713]: 2025-10-05 10:16:46.01889182 +0000 UTC m=+0.048179459 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001) Oct 5 06:16:46 localhost nova_compute[297130]: 2025-10-05 10:16:46.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:46 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:46 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:46 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:46 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:46 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": 
"auth get", "entity": "client.admin"} : dispatch Oct 5 06:16:46 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:46 localhost openstack_network_exporter[250246]: ERROR 10:16:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:16:46 localhost openstack_network_exporter[250246]: ERROR 10:16:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:16:46 localhost openstack_network_exporter[250246]: ERROR 10:16:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:16:46 localhost openstack_network_exporter[250246]: ERROR 10:16:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:16:46 localhost openstack_network_exporter[250246]: Oct 5 06:16:46 localhost openstack_network_exporter[250246]: ERROR 10:16:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:16:46 localhost openstack_network_exporter[250246]: Oct 5 06:16:47 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:16:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Oct 5 06:16:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 e277: 6 total, 6 up, 6 in Oct 5 06:16:47 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:16:47 localhost nova_compute[297130]: 2025-10-05 10:16:47.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:47 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 78 KiB/s wr, 5 op/s Oct 5 06:16:49 localhost 
ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:16:49 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "73cf1a00-2ccf-4d16-b6ad-b868d491120d", "mode": "0755", "format": "json"}]: dispatch Oct 5 06:16:49 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:49 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Oct 5 06:16:49 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 630 B/s rd, 64 KiB/s wr, 4 op/s Oct 5 06:16:51 localhost nova_compute[297130]: 2025-10-05 10:16:51.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:16:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. Oct 5 06:16:51 localhost systemd[1]: tmp-crun.XBBUnc.mount: Deactivated successfully. 
Oct 5 06:16:51 localhost podman[342734]: 2025-10-05 10:16:51.929719606 +0000 UTC m=+0.097083796 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Oct 5 06:16:51 localhost podman[342735]: 2025-10-05 10:16:51.974473492 +0000 UTC m=+0.138166944 container health_status 
ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:16:51 localhost podman[342735]: 2025-10-05 10:16:51.983223157 +0000 UTC m=+0.146916609 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) 
Oct 5 06:16:51 localhost podman[342734]: 2025-10-05 10:16:51.994263614 +0000 UTC m=+0.161627854 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:16:51 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 
614 B/s rd, 62 KiB/s wr, 4 op/s Oct 5 06:16:51 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:16:52 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:16:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:16:52.029 271653 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-05T10:16:51Z, description=, device_id=1161fa57-7dbf-488b-be13-cd186ebac0e1, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7570bf47-7b5b-403c-921a-a6706dd28f3b, ip_allocation=immediate, mac_address=fa:16:3e:86:f9:05, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-05T08:29:27Z, description=, dns_domain=, id=cda0aa48-2690-46e0-99f3-e1922fca64be, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=8b36437b65444bcdac75beef77b6981e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['c1f0b3ee-865f-4e87-a3b0-59949ea9e258'], tags=[], tenant_id=8b36437b65444bcdac75beef77b6981e, updated_at=2025-10-05T08:29:33Z, vlan_transparent=None, network_id=cda0aa48-2690-46e0-99f3-e1922fca64be, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3993, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-05T10:16:51Z on network 
cda0aa48-2690-46e0-99f3-e1922fca64be
Oct 5 06:16:52 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 2 addresses
Oct 5 06:16:52 localhost podman[342795]: 2025-10-05 10:16:52.245822039 +0000 UTC m=+0.062206527 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 5 06:16:52 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:16:52 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:16:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "37aa987a-4c1d-4818-90de-2eeabb9c3786", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "73cf1a00-2ccf-4d16-b6ad-b868d491120d", "format": "json"}]: dispatch
Oct 5 06:16:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, vol_name:cephfs) < ""
Oct 5 06:16:52 localhost neutron_dhcp_agent[271649]: 2025-10-05 10:16:52.443 271653 INFO neutron.agent.dhcp.agent [None req-0663b0ac-81c3-4bb3-819a-f066ee2ae663 - - - - - -] DHCP configuration for ports {'7570bf47-7b5b-403c-921a-a6706dd28f3b'} is completed
Oct 5 06:16:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/73cf1a00-2ccf-4d16-b6ad-b868d491120d/37aa987a-4c1d-4818-90de-2eeabb9c3786/.meta.tmp'
Oct 5 06:16:52 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/73cf1a00-2ccf-4d16-b6ad-b868d491120d/37aa987a-4c1d-4818-90de-2eeabb9c3786/.meta.tmp' to config b'/volumes/73cf1a00-2ccf-4d16-b6ad-b868d491120d/37aa987a-4c1d-4818-90de-2eeabb9c3786/.meta'
Oct 5 06:16:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, vol_name:cephfs) < ""
Oct 5 06:16:52 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "37aa987a-4c1d-4818-90de-2eeabb9c3786", "group_name": "73cf1a00-2ccf-4d16-b6ad-b868d491120d", "format": "json"}]: dispatch
Oct 5 06:16:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs subvolume getpath, sub_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, vol_name:cephfs) < ""
Oct 5 06:16:52 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs subvolume getpath, sub_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, vol_name:cephfs) < ""
Oct 5 06:16:52 localhost nova_compute[297130]: 2025-10-05 10:16:52.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:16:53 localhost podman[342817]: 2025-10-05 10:16:53.917713149 +0000 UTC m=+0.080759486 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Oct 5 06:16:53 localhost podman[342817]: 2025-10-05 10:16:53.930196976 +0000 UTC m=+0.093243303 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6)
Oct 5 06:16:53 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:16:53 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 59 KiB/s wr, 3 op/s
Oct 5 06:16:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:16:54 localhost nova_compute[297130]: 2025-10-05 10:16:54.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:16:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "37aa987a-4c1d-4818-90de-2eeabb9c3786", "group_name": "73cf1a00-2ccf-4d16-b6ad-b868d491120d", "format": "json"}]: dispatch
Oct 5 06:16:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:16:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:16:55 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:16:55.728+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '37aa987a-4c1d-4818-90de-2eeabb9c3786' of type subvolume
Oct 5 06:16:55 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '37aa987a-4c1d-4818-90de-2eeabb9c3786' of type subvolume
Oct 5 06:16:55 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "37aa987a-4c1d-4818-90de-2eeabb9c3786", "force": true, "group_name": "73cf1a00-2ccf-4d16-b6ad-b868d491120d", "format": "json"}]: dispatch
Oct 5 06:16:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs subvolume rm, sub_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, vol_name:cephfs) < ""
Oct 5 06:16:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/73cf1a00-2ccf-4d16-b6ad-b868d491120d/37aa987a-4c1d-4818-90de-2eeabb9c3786'' moved to trashcan
Oct 5 06:16:55 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:16:55 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs subvolume rm, sub_name:37aa987a-4c1d-4818-90de-2eeabb9c3786, vol_name:cephfs) < ""
Oct 5 06:16:55 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 59 KiB/s wr, 3 op/s
Oct 5 06:16:56 localhost podman[248157]: time="2025-10-05T10:16:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 5 06:16:56 localhost podman[248157]: @ - - [05/Oct/2025:10:16:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1"
Oct 5 06:16:56 localhost podman[248157]: @ - - [05/Oct/2025:10:16:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19385 "" "Go-http-client/1.1"
Oct 5 06:16:56 localhost nova_compute[297130]: 2025-10-05 10:16:56.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:16:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.
Oct 5 06:16:57 localhost podman[342837]: 2025-10-05 10:16:57.350165849 +0000 UTC m=+0.081248109 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible)
Oct 5 06:16:57 localhost podman[342837]: 2025-10-05 10:16:57.360170688 +0000 UTC m=+0.091252918 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 5 06:16:57 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully.
Oct 5 06:16:57 localhost nova_compute[297130]: 2025-10-05 10:16:57.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:16:58 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 284 B/s rd, 70 KiB/s wr, 3 op/s
Oct 5 06:16:58 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "73cf1a00-2ccf-4d16-b6ad-b868d491120d", "force": true, "format": "json"}]: dispatch
Oct 5 06:16:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:16:58 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:73cf1a00-2ccf-4d16-b6ad-b868d491120d, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:16:58 localhost dnsmasq[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/addn_hosts - 1 addresses
Oct 5 06:16:58 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/host
Oct 5 06:16:58 localhost podman[342872]: 2025-10-05 10:16:58.910270039 +0000 UTC m=+0.059087842 container kill 8f140fef3f5004a88a30029459fe2c7e26c6138a8959d9ad63aef877d76d1053 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda0aa48-2690-46e0-99f3-e1922fca64be, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 5 06:16:58 localhost dnsmasq-dhcp[325876]: read /var/lib/neutron/dhcp/cda0aa48-2690-46e0-99f3-e1922fca64be/opts
Oct 5 06:16:59 localhost nova_compute[297130]: 2025-10-05 10:16:59.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:16:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:00 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 44 KiB/s wr, 2 op/s
Oct 5 06:17:01 localhost nova_compute[297130]: 2025-10-05 10:17:01.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:17:02 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 44 KiB/s wr, 2 op/s
Oct 5 06:17:02 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "4f9d4e4b-92db-4781-a13f-95b01a21c2ce", "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:17:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:17:02 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:17:02 localhost nova_compute[297130]: 2025-10-05 10:17:02.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:17:04 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 65 KiB/s wr, 3 op/s
Oct 5 06:17:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a235631-9258-4d23-97d3-241838969bba", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "group_name": "4f9d4e4b-92db-4781-a13f-95b01a21c2ce", "format": "json"}]: dispatch
Oct 5 06:17:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a235631-9258-4d23-97d3-241838969bba, vol_name:cephfs) < ""
Oct 5 06:17:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 183 bytes to config b'/volumes/4f9d4e4b-92db-4781-a13f-95b01a21c2ce/7a235631-9258-4d23-97d3-241838969bba/.meta.tmp'
Oct 5 06:17:05 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/4f9d4e4b-92db-4781-a13f-95b01a21c2ce/7a235631-9258-4d23-97d3-241838969bba/.meta.tmp' to config b'/volumes/4f9d4e4b-92db-4781-a13f-95b01a21c2ce/7a235631-9258-4d23-97d3-241838969bba/.meta'
Oct 5 06:17:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a235631-9258-4d23-97d3-241838969bba, vol_name:cephfs) < ""
Oct 5 06:17:05 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a235631-9258-4d23-97d3-241838969bba", "group_name": "4f9d4e4b-92db-4781-a13f-95b01a21c2ce", "format": "json"}]: dispatch
Oct 5 06:17:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs subvolume getpath, sub_name:7a235631-9258-4d23-97d3-241838969bba, vol_name:cephfs) < ""
Oct 5 06:17:05 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs subvolume getpath, sub_name:7a235631-9258-4d23-97d3-241838969bba, vol_name:cephfs) < ""
Oct 5 06:17:06 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 36 KiB/s wr, 1 op/s
Oct 5 06:17:06 localhost nova_compute[297130]: 2025-10-05 10:17:06.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:17:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.
Oct 5 06:17:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.
Oct 5 06:17:07 localhost systemd[1]: tmp-crun.bzLtoy.mount: Deactivated successfully.
Oct 5 06:17:07 localhost podman[342892]: 2025-10-05 10:17:07.919214943 +0000 UTC m=+0.087191799 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 5 06:17:07 localhost podman[342892]: 2025-10-05 10:17:07.930890799 +0000 UTC m=+0.098867615 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 5 06:17:07 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully.
Oct 5 06:17:08 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s
Oct 5 06:17:08 localhost nova_compute[297130]: 2025-10-05 10:17:08.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:17:08 localhost systemd[1]: tmp-crun.JBJm5S.mount: Deactivated successfully.
Oct 5 06:17:08 localhost podman[342893]: 2025-10-05 10:17:08.027112509 +0000 UTC m=+0.190842280 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:17:08 localhost podman[342893]: 2025-10-05 10:17:08.035833554 +0000 UTC m=+0.199563385 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 5 06:17:08 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully.
Oct 5 06:17:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a235631-9258-4d23-97d3-241838969bba", "group_name": "4f9d4e4b-92db-4781-a13f-95b01a21c2ce", "format": "json"}]: dispatch
Oct 5 06:17:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a235631-9258-4d23-97d3-241838969bba, format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:17:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a235631-9258-4d23-97d3-241838969bba, format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs clone status, vol_name:cephfs) < ""
Oct 5 06:17:08 localhost ceph-659062ac-50b4-5607-b699-3105da7f55ee-mgr-np0005471152-kbhlus[301345]: 2025-10-05T10:17:08.766+0000 7f417fc90640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a235631-9258-4d23-97d3-241838969bba' of type subvolume
Oct 5 06:17:08 localhost ceph-mgr[301363]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a235631-9258-4d23-97d3-241838969bba' of type subvolume
Oct 5 06:17:08 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a235631-9258-4d23-97d3-241838969bba", "force": true, "group_name": "4f9d4e4b-92db-4781-a13f-95b01a21c2ce", "format": "json"}]: dispatch
Oct 5 06:17:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs subvolume rm, sub_name:7a235631-9258-4d23-97d3-241838969bba, vol_name:cephfs) < ""
Oct 5 06:17:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/4f9d4e4b-92db-4781-a13f-95b01a21c2ce/7a235631-9258-4d23-97d3-241838969bba'' moved to trashcan
Oct 5 06:17:08 localhost ceph-mgr[301363]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct 5 06:17:08 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs subvolume rm, sub_name:7a235631-9258-4d23-97d3-241838969bba, vol_name:cephfs) < ""
Oct 5 06:17:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:10 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 34 KiB/s wr, 2 op/s
Oct 5 06:17:11 localhost nova_compute[297130]: 2025-10-05 10:17:11.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 5 06:17:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:17:11
Oct 5 06:17:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 5 06:17:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap
Oct 5 06:17:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_data', '.mgr', 'volumes', 'images', 'manila_metadata', 'backups', 'vms']
Oct 5 06:17:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections..
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: []
Oct 5 06:17:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "4f9d4e4b-92db-4781-a13f-95b01a21c2ce", "force": true, "format": "json"}]: dispatch
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:4f9d4e4b-92db-4781-a13f-95b01a21c2ce, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:17:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.
Oct 5 06:17:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.
Oct 5 06:17:11 localhost podman[342930]: 2025-10-05 10:17:11.905739443 +0000 UTC m=+0.075484234 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 5 06:17:11 localhost podman[342930]: 2025-10-05 10:17:11.915225159 +0000 UTC m=+0.084969920 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 5 06:17:11 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully.
Oct 5 06:17:11 localhost systemd[1]: tmp-crun.ihEnJU.mount: Deactivated successfully.
Oct 5 06:17:11 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "33e84d8e-ab85-4d69-ac01-1fc46cfecbc1", "mode": "0755", "format": "json"}]: dispatch
Oct 5 06:17:11 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:33e84d8e-ab85-4d69-ac01-1fc46cfecbc1, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:17:11 localhost podman[342931]: 2025-10-05 10:17:11.968224456 +0000 UTC m=+0.132013576 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:33e84d8e-ab85-4d69-ac01-1fc46cfecbc1, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Oct 5 06:17:12 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 34 KiB/s wr, 2 op/s
Oct 5 06:17:12 localhost podman[342931]: 2025-10-05 10:17:12.00589097 +0000 UTC m=+0.169680160 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 5 06:17:12 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully.
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Oct 5 06:17:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002323629868090452 of space, bias 4.0, pg target 1.849609375 quantized to 16 (current 16)
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:17:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 5 06:17:13 localhost nova_compute[297130]: 2025-10-05 10:17:13.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:14 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 61 KiB/s wr, 3 op/s
Oct 5 06:17:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:16 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Oct 5 06:17:16 localhost nova_compute[297130]: 2025-10-05 10:17:16.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:16 localhost openstack_network_exporter[250246]: ERROR 10:17:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:17:16 localhost openstack_network_exporter[250246]: ERROR 10:17:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 5 06:17:16 localhost openstack_network_exporter[250246]: ERROR 10:17:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 5 06:17:16 localhost openstack_network_exporter[250246]: ERROR 10:17:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 5 06:17:16 localhost openstack_network_exporter[250246]:
Oct 5 06:17:16 localhost openstack_network_exporter[250246]: ERROR 10:17:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 5 06:17:16 localhost openstack_network_exporter[250246]:
Oct 5 06:17:18 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 52 KiB/s wr, 3 op/s
Oct 5 06:17:18 localhost nova_compute[297130]: 2025-10-05 10:17:18.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:18 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup snapshot rm", "vol_name": "cephfs", "group_name": "33e84d8e-ab85-4d69-ac01-1fc46cfecbc1", "snap_name": "e2118344-bfc3-417b-a560-bb9b84c3172c", "force": true, "format": "json"}]: dispatch
Oct 5 06:17:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:33e84d8e-ab85-4d69-ac01-1fc46cfecbc1, prefix:fs subvolumegroup snapshot rm, snap_name:e2118344-bfc3-417b-a560-bb9b84c3172c, vol_name:cephfs) < ""
Oct 5 06:17:18 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_snapshot_rm(force:True, format:json, group_name:33e84d8e-ab85-4d69-ac01-1fc46cfecbc1, prefix:fs subvolumegroup snapshot rm, snap_name:e2118344-bfc3-417b-a560-bb9b84c3172c, vol_name:cephfs) < ""
Oct 5 06:17:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 5 06:17:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1839722192' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 5 06:17:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 5 06:17:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1839722192' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 5 06:17:20 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 40 KiB/s wr, 2 op/s
Oct 5 06:17:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:17:20.413 163201 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 5 06:17:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:17:20.414 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 5 06:17:20 localhost ovn_metadata_agent[163196]: 2025-10-05 10:17:20.414 163201 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 5 06:17:21 localhost ceph-mgr[301363]: log_channel(audit) log [DBG] : from='client.15864 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "33e84d8e-ab85-4d69-ac01-1fc46cfecbc1", "force": true, "format": "json"}]: dispatch
Oct 5 06:17:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:33e84d8e-ab85-4d69-ac01-1fc46cfecbc1, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:17:21 localhost ceph-mgr[301363]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:33e84d8e-ab85-4d69-ac01-1fc46cfecbc1, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Oct 5 06:17:21 localhost nova_compute[297130]: 2025-10-05 10:17:21.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:22 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 40 KiB/s wr, 2 op/s
Oct 5 06:17:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.
Oct 5 06:17:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.
Oct 5 06:17:22 localhost podman[342973]: 2025-10-05 10:17:22.916066821 +0000 UTC m=+0.083784437 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 5 06:17:22 localhost podman[342973]: 2025-10-05 10:17:22.953742566 +0000 UTC m=+0.121460212 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 5 06:17:22 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully.
Oct 5 06:17:23 localhost podman[342974]: 2025-10-05 10:17:23.014911253 +0000 UTC m=+0.179313580 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Oct 5 06:17:23 localhost podman[342974]: 2025-10-05 10:17:23.051217641 +0000 UTC m=+0.215619978 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 5 06:17:23 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully.
Oct 5 06:17:23 localhost nova_compute[297130]: 2025-10-05 10:17:23.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:24 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 42 KiB/s wr, 2 op/s
Oct 5 06:17:24 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 5 06:17:24 localhost nova_compute[297130]: 2025-10-05 10:17:24.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 5 06:17:24 localhost ovn_metadata_agent[163196]: 2025-10-05 10:17:24.734 163201 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '46:05:d5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:3f:fb:9b:8c:40'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 5 06:17:24 localhost ovn_metadata_agent[163196]: 2025-10-05 10:17:24.735 163201 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 5 06:17:24 localhost nova_compute[297130]: 2025-10-05 10:17:24.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 5 06:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.
Oct 5 06:17:24 localhost systemd[1]: tmp-crun.kwLcng.mount: Deactivated successfully.
Oct 5 06:17:24 localhost podman[343011]: 2025-10-05 10:17:24.911002871 +0000 UTC m=+0.079083121 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Oct 5 06:17:24 localhost podman[343011]: 2025-10-05 10:17:24.927255359 +0000 UTC m=+0.095335609 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Oct 5 06:17:24 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully.
Oct 5 06:17:26 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 1 op/s Oct 5 06:17:26 localhost podman[248157]: time="2025-10-05T10:17:26Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:17:26 localhost podman[248157]: @ - - [05/Oct/2025:10:17:26 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:17:26 localhost podman[248157]: @ - - [05/Oct/2025:10:17:26 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19377 "" "Go-http-client/1.1" Oct 5 06:17:26 localhost nova_compute[297130]: 2025-10-05 10:17:26.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:26 localhost nova_compute[297130]: 2025-10-05 10:17:26.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:26 localhost nova_compute[297130]: 2025-10-05 10:17:26.273 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:26 localhost nova_compute[297130]: 2025-10-05 10:17:26.274 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Oct 5 06:17:26 localhost nova_compute[297130]: 2025-10-05 10:17:26.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. Oct 5 06:17:27 localhost podman[343029]: 2025-10-05 10:17:27.914376526 +0000 UTC m=+0.082642717 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true) Oct 5 06:17:27 localhost podman[343029]: 2025-10-05 10:17:27.944282642 +0000 UTC m=+0.112548843 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001) Oct 5 06:17:27 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. Oct 5 06:17:28 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 21 KiB/s wr, 1 op/s Oct 5 06:17:28 localhost nova_compute[297130]: 2025-10-05 10:17:28.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:28 localhost nova_compute[297130]: 2025-10-05 10:17:28.298 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Oct 5 06:17:28 localhost ceph-mon[316511]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4729 writes, 36K keys, 4729 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.04 MB/s#012Cumulative WAL: 4729 writes, 4729 syncs, 1.00 writes per sync, written: 0.05 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2518 writes, 13K keys, 2518 commit groups, 1.0 writes per commit group, ingest: 17.31 MB, 0.03 MB/s#012Interval WAL: 2518 writes, 2518 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score 
Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 152.0 0.24 0.10 16 0.015 0 0 0.0 0.0#012 L6 1/0 15.77 MB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 6.6 165.4 152.4 1.56 0.71 15 0.104 202K 7725 0.0 0.0#012 Sum 1/0 15.77 MB 0.0 0.3 0.0 0.2 0.3 0.1 0.0 7.6 143.6 152.3 1.80 0.81 31 0.058 202K 7725 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 12.5 134.2 136.4 0.90 0.39 14 0.065 101K 3764 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 0.0 165.4 152.4 1.56 0.71 15 0.104 202K 7725 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 153.4 0.24 0.10 15 0.016 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.035, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.23 MB/s write, 0.25 GB read, 0.22 MB/s read, 1.8 seconds#012Interval compaction: 0.12 GB write, 0.21 MB/s write, 0.12 GB 
read, 0.20 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5603a6bb1350#2 capacity: 304.00 MB usage: 22.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000164 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1304,20.71 MB,6.81343%) FilterBlock(31,597.55 KB,0.191955%) IndexBlock(31,743.08 KB,0.238705%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Oct 5 06:17:29 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:29 localhost nova_compute[297130]: 2025-10-05 10:17:29.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:29 localhost nova_compute[297130]: 2025-10-05 10:17:29.268 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:30 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 8.3 KiB/s wr, 0 op/s Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager.update_available_resource 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.294 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.294 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.295 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.295 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Auditing locally available compute resources for np0005471152.localdomain (node: np0005471152.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.295 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:17:30 localhost ovn_controller[157556]: 
2025-10-05T10:17:30Z|00203|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Oct 5 06:17:30 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Oct 5 06:17:30 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/282501862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.741 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.917 2 WARNING nova.virt.libvirt.driver [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.918 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Hypervisor/Node resource view: name=np0005471152.localdomain free_ram=11436MB free_disk=41.836944580078125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": 
"1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.918 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Oct 5 06:17:30 localhost nova_compute[297130]: 2025-10-05 10:17:30.919 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.103 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.104 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Final resource view: name=np0005471152.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.357 2 DEBUG oslo_concurrency.processutils [None 
req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:31 localhost ovn_metadata_agent[163196]: 2025-10-05 10:17:31.737 163201 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2abb7f3-ae8d-4817-a99b-01536f41e92b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.819 2 DEBUG oslo_concurrency.processutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.825 2 DEBUG nova.compute.provider_tree [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed in ProviderTree for provider: 36221146-244b-49ab-8700-5471fa19d0c5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.848 2 DEBUG nova.scheduler.client.report [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Inventory has not changed for provider 36221146-244b-49ab-8700-5471fa19d0c5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 
'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.850 2 DEBUG nova.compute.resource_tracker [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Compute_service record updated for np0005471152.localdomain:np0005471152.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Oct 5 06:17:31 localhost nova_compute[297130]: 2025-10-05 10:17:31.851 2 DEBUG oslo_concurrency.lockutils [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Oct 5 06:17:32 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 8.3 KiB/s wr, 0 op/s Oct 5 06:17:32 localhost nova_compute[297130]: 2025-10-05 10:17:32.852 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:32 localhost nova_compute[297130]: 2025-10-05 10:17:32.853 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Oct 5 06:17:32 localhost nova_compute[297130]: 2025-10-05 10:17:32.853 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] 
Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Oct 5 06:17:32 localhost nova_compute[297130]: 2025-10-05 10:17:32.867 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Oct 5 06:17:33 localhost nova_compute[297130]: 2025-10-05 10:17:33.055 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:33 localhost nova_compute[297130]: 2025-10-05 10:17:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:33 localhost nova_compute[297130]: 2025-10-05 10:17:33.274 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:33 localhost nova_compute[297130]: 2025-10-05 10:17:33.275 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:33 localhost nova_compute[297130]: 2025-10-05 10:17:33.275 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Oct 5 06:17:34 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 8.5 KiB/s wr, 0 op/s Oct 5 06:17:34 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:35 localhost nova_compute[297130]: 2025-10-05 10:17:35.272 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:35 localhost sshd[343091]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:17:36 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.4 KiB/s wr, 0 op/s Oct 5 06:17:36 localhost nova_compute[297130]: 2025-10-05 10:17:36.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:38 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.4 KiB/s wr, 0 op/s Oct 5 06:17:38 localhost nova_compute[297130]: 2025-10-05 10:17:38.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:38 localhost nova_compute[297130]: 2025-10-05 10:17:38.287 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:38 localhost nova_compute[297130]: 2025-10-05 10:17:38.287 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Oct 5 06:17:38 localhost nova_compute[297130]: 2025-10-05 10:17:38.306 2 DEBUG nova.compute.manager [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Oct 5 06:17:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. Oct 5 06:17:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:17:38 localhost podman[343096]: 2025-10-05 10:17:38.924843017 +0000 UTC m=+0.084954869 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:17:38 localhost podman[343096]: 2025-10-05 10:17:38.938792622 +0000 UTC m=+0.098904474 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:17:38 localhost 
systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:17:38 localhost podman[343095]: 2025-10-05 10:17:38.983408114 +0000 UTC m=+0.147667387 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true) Oct 5 06:17:39 localhost podman[343095]: 
2025-10-05 10:17:39.026224578 +0000 UTC m=+0.190483801 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, tcib_managed=true, managed_by=edpm_ansible) Oct 5 06:17:39 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. 
Oct 5 06:17:39 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:40 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s wr, 0 op/s Oct 5 06:17:41 localhost nova_compute[297130]: 2025-10-05 10:17:41.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:17:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:17:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:17:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:17:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:17:41 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:17:42 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s wr, 0 op/s Oct 5 06:17:42 localhost nova_compute[297130]: 2025-10-05 10:17:42.056 2 DEBUG oslo_service.periodic_task [None req-17997b2a-72ed-4cf7-8f39-f1f3b6e89b1c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Oct 5 06:17:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:17:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. 
Oct 5 06:17:42 localhost podman[343140]: 2025-10-05 10:17:42.917586195 +0000 UTC m=+0.077217441 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=iscsid) Oct 5 06:17:42 localhost podman[343140]: 2025-10-05 10:17:42.925475877 +0000 UTC m=+0.085107153 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 
(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:17:42 localhost systemd[1]: 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. 
Oct 5 06:17:42 localhost podman[343141]: 2025-10-05 10:17:42.98610018 +0000 UTC m=+0.143243098 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251001, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Oct 5 06:17:43 localhost podman[343141]: 2025-10-05 10:17:43.092266239 +0000 UTC m=+0.249409137 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Oct 5 06:17:43 localhost systemd[1]: 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. 
Oct 5 06:17:43 localhost nova_compute[297130]: 2025-10-05 10:17:43.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:44 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s wr, 0 op/s Oct 5 06:17:44 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:45 localhost sshd[343185]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:17:46 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:46 localhost nova_compute[297130]: 2025-10-05 10:17:46.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Oct 5 06:17:46 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "config generate-minimal-conf"} : dispatch Oct 5 06:17:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Oct 5 06:17:46 localhost ceph-mon[316511]: log_channel(audit) log [INF] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:17:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Oct 5 06:17:46 localhost ceph-mgr[301363]: [progress INFO root] update: starting ev 
e4701c18-8ae9-4dcd-be57-f304bc9053ed (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:17:46 localhost ceph-mgr[301363]: [progress INFO root] complete: finished ev e4701c18-8ae9-4dcd-be57-f304bc9053ed (Updating node-proxy deployment (+3 -> 3)) Oct 5 06:17:46 localhost ceph-mgr[301363]: [progress INFO root] Completed event e4701c18-8ae9-4dcd-be57-f304bc9053ed (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Oct 5 06:17:46 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Oct 5 06:17:46 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Oct 5 06:17:46 localhost openstack_network_exporter[250246]: ERROR 10:17:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:17:46 localhost openstack_network_exporter[250246]: ERROR 10:17:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:17:46 localhost openstack_network_exporter[250246]: ERROR 10:17:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:17:46 localhost openstack_network_exporter[250246]: ERROR 10:17:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:17:46 localhost openstack_network_exporter[250246]: Oct 5 06:17:46 localhost openstack_network_exporter[250246]: ERROR 10:17:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:17:46 localhost openstack_network_exporter[250246]: Oct 5 06:17:47 localhost ceph-mgr[301363]: [progress INFO root] Writing back 50 completed events Oct 5 06:17:47 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command([{prefix=config-key 
set, key=mgr/progress/completed}] v 0) Oct 5 06:17:47 localhost ceph-mon[316511]: from='mgr.34408 172.18.0.108:0/4194771758' entity='mgr.np0005471152.kbhlus' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Oct 5 06:17:47 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:17:47 localhost ceph-mon[316511]: from='mgr.34408 ' entity='mgr.np0005471152.kbhlus' Oct 5 06:17:48 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:48 localhost nova_compute[297130]: 2025-10-05 10:17:48.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:49 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:50 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:51 localhost nova_compute[297130]: 2025-10-05 10:17:51.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:52 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:53 localhost nova_compute[297130]: 2025-10-05 10:17:53.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c. Oct 5 06:17:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114. 
Oct 5 06:17:53 localhost podman[343276]: 2025-10-05 10:17:53.917911082 +0000 UTC m=+0.081913106 container health_status ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Oct 5 06:17:53 localhost podman[343276]: 2025-10-05 10:17:53.92636587 +0000 UTC m=+0.090367944 container exec_died ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Oct 5 06:17:53 localhost systemd[1]: ca1a72dbacc6bcae014161d3ad4280b25dd25f878c5bed3a716ccc60c3097114.service: Deactivated successfully. Oct 5 06:17:54 localhost podman[343275]: 2025-10-05 10:17:54.00696602 +0000 UTC m=+0.171627532 container health_status b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:17:54 localhost podman[343275]: 2025-10-05 10:17:54.021109121 +0000 UTC m=+0.185770663 container exec_died b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm) Oct 5 06:17:54 localhost 
ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:54 localhost systemd[1]: b7f28ca4b3875204d5f69ca9d31ef0ca198405c883aefa3415578ccc3de4358c.service: Deactivated successfully. Oct 5 06:17:54 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd. Oct 5 06:17:55 localhost podman[343315]: 2025-10-05 10:17:55.151640055 +0000 UTC m=+0.085778932 container health_status 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-type=git) Oct 5 06:17:55 localhost podman[343315]: 2025-10-05 10:17:55.164713046 +0000 UTC m=+0.098851923 container exec_died 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, 
name=ubi9-minimal, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, 
distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Oct 5 06:17:55 localhost systemd[1]: 9f8c4749e2457597ab7c79b799bf7d1289533ba3a112768b4dd41c120ca7d5dd.service: Deactivated successfully. Oct 5 06:17:55 localhost sshd[343335]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:17:56 localhost podman[248157]: time="2025-10-05T10:17:56Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Oct 5 06:17:56 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:56 localhost podman[248157]: @ - - [05/Oct/2025:10:17:56 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146316 "" "Go-http-client/1.1" Oct 5 06:17:56 localhost podman[248157]: @ - - [05/Oct/2025:10:17:56 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19380 "" "Go-http-client/1.1" Oct 5 06:17:56 localhost nova_compute[297130]: 2025-10-05 10:17:56.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:58 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:17:58 localhost nova_compute[297130]: 2025-10-05 10:17:58.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:17:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01. 
Oct 5 06:17:58 localhost podman[343340]: 2025-10-05 10:17:58.91151998 +0000 UTC m=+0.075307679 container health_status 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Oct 5 06:17:58 localhost podman[343340]: 2025-10-05 10:17:58.939478023 +0000 UTC 
m=+0.103265792 container exec_died 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:17:58 localhost systemd[1]: 2360c85067fc8f37b08130cc84f4c3831bd0c2fffae3f7caacd4f6141c15be01.service: Deactivated successfully. 
Oct 5 06:17:58 localhost sshd[343358]: main: sshd: ssh-rsa algorithm is disabled Oct 5 06:17:59 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:17:59 localhost systemd-logind[760]: New session 76 of user zuul. Oct 5 06:17:59 localhost systemd[1]: Started Session 76 of User zuul. Oct 5 06:17:59 localhost python3[343381]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163efc-24cc-abfa-20d5-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Oct 5 06:18:00 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:00 localhost systemd[1]: session-76.scope: Deactivated successfully. Oct 5 06:18:00 localhost systemd-logind[760]: Session 76 logged out. Waiting for processes to exit. Oct 5 06:18:00 localhost systemd-logind[760]: Removed session 76. 
Oct 5 06:18:01 localhost nova_compute[297130]: 2025-10-05 10:18:01.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:02 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:03 localhost nova_compute[297130]: 2025-10-05 10:18:03.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:04 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:04 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:18:06 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:06 localhost nova_compute[297130]: 2025-10-05 10:18:06.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:08 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:08 localhost nova_compute[297130]: 2025-10-05 10:18:08.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:09 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:18:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f. 
Oct 5 06:18:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e. Oct 5 06:18:09 localhost podman[343387]: 2025-10-05 10:18:09.932500234 +0000 UTC m=+0.093944520 container health_status 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Oct 5 06:18:09 
localhost podman[343388]: 2025-10-05 10:18:09.972497731 +0000 UTC m=+0.134074651 container health_status ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Oct 5 06:18:09 localhost podman[343387]: 2025-10-05 10:18:09.99807209 +0000 UTC m=+0.159516386 container exec_died 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, 
org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Oct 5 06:18:10 localhost podman[343388]: 2025-10-05 10:18:10.006880548 +0000 UTC m=+0.168457448 container exec_died ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Oct 5 06:18:10 localhost systemd[1]: 508407177dfcd0cdc8b45b438f3b2ae1f5ac512cff4e0963cc261fc9213f717f.service: Deactivated successfully. Oct 5 06:18:10 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:10 localhost systemd[1]: ee2f0c61edad179efe89b754cf31c40a57334f245540ab8b3b21034b3402d42e.service: Deactivated successfully. Oct 5 06:18:11 localhost ceph-mgr[301363]: [balancer INFO root] Optimize plan auto_2025-10-05_10:18:11 Oct 5 06:18:11 localhost ceph-mgr[301363]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Oct 5 06:18:11 localhost ceph-mgr[301363]: [balancer INFO root] do_upmap Oct 5 06:18:11 localhost ceph-mgr[301363]: [balancer INFO root] pools ['manila_data', 'images', 'backups', 'manila_metadata', 'volumes', 'vms', '.mgr'] Oct 5 06:18:11 localhost ceph-mgr[301363]: [balancer INFO root] prepared 0/10 changes Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. 
Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:18:11 localhost nova_compute[297130]: 2025-10-05 10:18:11.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [] Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] scanning for idle connections.. Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 06:18:11 localhost ceph-mgr[301363]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Oct 5 06:18:12 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] _maybe_adjust Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033260922668900054 of space, bias 1.0, pg target 0.6652184533780011 quantized to 32 (current 32) Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 
using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Oct 5 06:18:12 localhost ceph-mgr[301363]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0023617981400055835 of space, bias 4.0, pg target 1.8799913194444446 quantized to 16 (current 16) Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: vms, start_after= Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: volumes, start_after= Oct 5 06:18:12 
localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: images, start_after= Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:18:12 localhost ceph-mgr[301363]: [rbd_support INFO root] load_schedules: backups, start_after= Oct 5 06:18:13 localhost nova_compute[297130]: 2025-10-05 10:18:13.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6. Oct 5 06:18:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c. Oct 5 06:18:13 localhost podman[343428]: 2025-10-05 10:18:13.920006169 +0000 UTC m=+0.072620426 container health_status 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible) Oct 5 06:18:13 localhost podman[343428]: 2025-10-05 10:18:13.987216039 +0000 UTC m=+0.139830316 container exec_died 70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Oct 5 06:18:14 localhost systemd[1]: 
70183d0399c08f92b9f1b94dc05d4ffede6a461a3afe5c652983618a7a4c4e0c.service: Deactivated successfully. Oct 5 06:18:14 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Oct 5 06:18:14 localhost podman[343427]: 2025-10-05 10:18:13.98949367 +0000 UTC m=+0.141800788 container health_status 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
org.label-schema.name=CentOS Stream 9 Base Image) Oct 5 06:18:14 localhost podman[343427]: 2025-10-05 10:18:14.071858129 +0000 UTC m=+0.224165187 container exec_died 289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=88dc57612f447daadb492dcf3ad854ac, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Oct 5 06:18:14 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Oct 5 06:18:14 localhost systemd[1]: 
289ba0dc454d5fd830c1ac301f0489ed50b12ab503386ae52e6a6eb7b1afb7d6.service: Deactivated successfully. Oct 5 06:18:16 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Oct 5 06:18:16 localhost openstack_network_exporter[250246]: ERROR 10:18:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Oct 5 06:18:16 localhost openstack_network_exporter[250246]: ERROR 10:18:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:18:16 localhost openstack_network_exporter[250246]: ERROR 10:18:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Oct 5 06:18:16 localhost openstack_network_exporter[250246]: ERROR 10:18:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Oct 5 06:18:16 localhost openstack_network_exporter[250246]: Oct 5 06:18:16 localhost openstack_network_exporter[250246]: ERROR 10:18:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Oct 5 06:18:16 localhost openstack_network_exporter[250246]: Oct 5 06:18:16 localhost nova_compute[297130]: 2025-10-05 10:18:16.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:18 localhost ceph-mgr[301363]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Oct 5 06:18:18 localhost nova_compute[297130]: 2025-10-05 10:18:18.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Oct 5 06:18:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 
348127232 kv_alloc: 318767104 Oct 5 06:18:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Oct 5 06:18:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3146163871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Oct 5 06:18:19 localhost ceph-mon[316511]: mon.np0005471152@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Oct 5 06:18:19 localhost ceph-mon[316511]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3146163871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Oct 5 06:18:19 localhost sshd[343471]: main: sshd: ssh-rsa algorithm is disabled